North Korea Tests AI-Enabled Attack Drones, Orders Rapid AI Development


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

North Korean leader Kim Jong Un supervised tests of AI-enabled attack drones, which successfully destroyed targets, and called for rapid advancement of artificial intelligence technologies in military applications. The event highlights North Korea's prioritization of AI-driven unmanned weapon systems, raising concerns about future risks associated with autonomous military technologies.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI integration in drones used for attack purposes, including suicide drones impacting targets in simulations. These AI-enabled weapons systems are designed for lethal military use, which inherently carries a high risk of causing injury or death (harm to persons) and disruption of critical infrastructure or security. Since the event involves the development and testing of such AI systems with clear potential for harm, but no actual harm is reported yet, it fits the definition of an AI Hazard rather than an AI Incident.[AI generated]
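
In plain terms, the rule applied above is: an event involving an AI system is logged as an AI Incident when harm has already materialised, and as an AI Hazard when harm is only a credible future possibility. A minimal, purely illustrative sketch of that decision rule follows; it is not the AIM's actual classifier, and the field and function names are hypothetical.

```python
# Illustrative sketch only -- NOT the AIM's actual classifier.
# Field and function names are hypothetical simplifications of the
# hazard-vs-incident reasoning described on this page.
from dataclasses import dataclass

@dataclass
class Event:
    involves_ai_system: bool     # an AI system is developed, tested, or used
    harm_has_occurred: bool      # injury, disruption, or rights violations already realised
    credible_future_harm: bool   # significant harm is plausible given the system's intended use

def classify(event: Event) -> str:
    """Coarse label mirroring the rationale used throughout this page."""
    if not event.involves_ai_system:
        return "Unrelated"
    if event.harm_has_occurred:
        return "AI Incident"
    if event.credible_future_harm:
        return "AI Hazard"
    return "Complementary Information"

# The drone tests reported here: AI systems involved, no harm reported yet,
# but lethal use is plausible -- hence the "AI Hazard" label.
print(classify(Event(True, False, True)))  # -> AI Hazard
```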
AI principles
Accountability, Safety, Respect of human rights, Transparency & explainability, Democracy & human autonomy, Human wellbeing

Industries
Government, security, and defence

Affected stakeholders
Other

Harm types
Economic/Property

Severity
AI hazard

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard


Kim Jong-un oversees new suicide drones and bets on implementing AI

2025-09-19
Yahoo!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in drones used for attack purposes, including suicide drones impacting targets in simulations. These AI-enabled weapons systems are designed for lethal military use, which inherently carries a high risk of causing injury or death (harm to persons) and disruption of critical infrastructure or security. Since the event involves the development and testing of such AI systems with clear potential for harm, but no actual harm is reported yet, it fits the definition of an AI Hazard rather than an AI Incident.

Kim Jong Un inspects North Korea's Unmanned Aerial Technology Complex

2025-09-19
新华网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development, testing, and planned expansion of unmanned aerial weapon systems, which are AI systems or AI-enabled systems. The use of such systems in military contexts inherently carries risks of injury, harm, or disruption. Since the article reports on testing and plans for further strengthening these AI-enabled weapons but does not report any actual harm or incident, this qualifies as an AI Hazard rather than an AI Incident. The plausible future harm includes potential injury, disruption, or violations of rights due to autonomous weapons use.

Kim Jong Un Says Advancing AI for Drones Top Military Priority

2025-09-19
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and testing of drones with AI for military use, which are likely to be autonomous or semi-autonomous systems. While no direct harm is reported, the advancement and prioritization of AI-enabled military drones plausibly pose significant future risks, including potential harm to people, infrastructure, or geopolitical stability. Therefore, this event constitutes an AI Hazard due to the credible risk associated with the development and deployment of AI-powered autonomous weapons systems.

North Korea: Kim Jong-un oversees new suicide drones and bets on implementing AI

2025-09-19
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI in military drones, including suicide drones, which are weaponized unmanned systems. The involvement of AI in these drones' operation suggests autonomous or semi-autonomous decision-making capabilities. The development and potential deployment of such AI-enabled lethal systems pose a credible risk of harm to people and communities, constituting a plausible future harm. Since no actual harm is reported yet but the event clearly indicates a credible risk of AI-enabled military harm, this qualifies as an AI Hazard rather than an AI Incident.

North Korea's Kim Jong Un Oversees Drone Testing, KCNA Says

2025-09-18
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The involvement of AI in military drones, especially suicide drones and unmanned attack aircraft, represents a significant AI Hazard because such systems could plausibly lead to harm including injury or death, disruption of critical infrastructure, or violations of human rights. Although no specific harm is reported as having occurred yet, the development and testing of AI-powered lethal drones pose credible risks of future AI Incidents. Therefore, this event is best classified as an AI Hazard.

Kim Jong-un oversees new suicide drones and bets on AI

2025-09-19
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in drones designed for attack (suicide drones) and the strategic priority given to AI in military modernization by North Korea. These drones are AI systems as they perform autonomous or semi-autonomous targeting and attack functions. Although the article does not report actual harm or incidents caused by these drones, the development and testing of AI-enabled autonomous weapons with lethal capabilities pose a credible risk of future harm, including injury, disruption, and violations of human rights. Therefore, this qualifies as an AI Hazard under the framework, as the AI system's use could plausibly lead to an AI Incident. The event is not an AI Incident because no harm has yet been reported, nor is it merely complementary information or unrelated news.

Kim Jong Un declares AI military drone development a 'top priority'

2025-09-19
Al Jazeera Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being rapidly developed and prioritized for military drone and unmanned weapons systems in North Korea. These systems qualify as AI systems due to their autonomous or semi-autonomous nature in military applications. While no direct harm is reported yet, the strategic emphasis on AI military drones in a highly militarized and tense geopolitical context implies a credible risk of future harm, including injury, disruption, and violations of human rights. The event is about the development and intended use of AI systems with high potential for misuse and harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

(LEAD) N. Korea's Kim oversees performance test of tactical attack drones | Yonhap News Agency

2025-09-19
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into tactical attack drones being tested and developed by North Korea. While no direct harm or incident is reported, the nature of these AI systems—autonomous or semi-autonomous attack drones—implies a plausible risk of future harm, such as injury or disruption in military conflict. The event is therefore best classified as an AI Hazard, as it involves the development and testing of AI systems that could plausibly lead to significant harm, but no harm has yet occurred or been reported.

N. Korea's Kim oversees performance test of tactical attack drones | Yonhap News Agency

2025-09-18
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology applied to tactical attack drones, which are weapons systems capable of autonomous or semi-autonomous operation. The use of AI in military drones inherently carries a high risk of harm, including injury or death and disruption of peace and security. Since the article reports on performance tests and development without any realized harm or incident, it fits the definition of an AI Hazard rather than an AI Incident. The event highlights the plausible future risk of harm from AI-enabled military drones but does not describe any direct or indirect harm that has occurred yet.

Kim Jong-un oversees performance tests of tactical attack drones | YONHAP NEWS AGENCY

2025-09-19
Agence de presse Yonhap
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-enabled tactical attack drones) in their development and testing stages. While no direct harm or incident is reported, the nature of these AI systems as autonomous or semi-autonomous weapons implies a credible risk of future harm, such as injury or death in military conflict. Therefore, this qualifies as an AI Hazard because the AI system's development and intended use could plausibly lead to an AI Incident in the future. It is not an AI Incident since no harm has yet occurred, nor is it Complementary Information or Unrelated.

Kim Jong-un oversees a performance test of tactical attack drones | YONHAP NEWS AGENCY

2025-09-19
Agencia de Noticias Yonhap
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in tactical attack drones, which are weapons systems capable of autonomous or semi-autonomous operation. The development and testing of such AI-enabled military drones represent a credible risk of future harm, including injury or death, disruption of critical infrastructure, and broader harm to communities. Although no specific harm has yet occurred from this test, the event clearly indicates the potential for AI-driven military applications that could lead to AI Incidents. Since harm is not reported as having occurred yet, the classification is AI Hazard rather than AI Incident.

Kim Jong Un guides performance test of unmanned weapons systems

2025-09-19
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems integrated into unmanned weapon systems, which are explicitly mentioned as being tested and improved for combat use. The involvement of AI in autonomous or semi-autonomous weaponry poses a credible risk of harm to people and communities due to their military application. Although no specific harm is reported as having occurred yet, the described activities plausibly lead to future AI incidents involving injury, harm, or disruption. Therefore, this event qualifies as an AI Hazard under the framework, as it plausibly leads to AI incidents through the development and deployment of AI-enabled autonomous weapons.

North's Kim oversees performance test of tactical attack drones

2025-09-19
중앙일보
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology applied to tactical attack drones, which are unmanned aerial vehicles used for military combat. The use and development of AI in autonomous or semi-autonomous weapons systems inherently carry risks of injury, death, and disruption of critical infrastructure. While the article does not report a specific incident of harm, the testing and advancement of such AI-enabled weapons systems constitute a plausible future risk of harm. According to the OECD framework, the development and use of AI-powered autonomous weapons with combat capabilities are classified as an AI Hazard due to their potential to cause significant harm if deployed.

Kim Jong Un oversees unprecedented attack drone test and calls for developing AI

2025-09-19
Ouest France
Why's our monitor labelling this an incident or hazard?
The article describes the testing of large attack drones, which are likely equipped with AI for autonomous or semi-autonomous operation, and the explicit call by Kim Jong Un to develop AI technology for military purposes. The use of AI-enabled attack drones in a military context poses a credible risk of harm, including injury or death, disruption of security, and escalation of conflict. Although no specific harm is reported as having occurred during the test, the development and deployment of AI-powered attack drones constitute a plausible future risk of significant harm. Therefore, this event qualifies as an AI Hazard due to the credible potential for AI-enabled military drones to cause harm.

Kim Jong Un inspects North Korea's Unmanned Aerial Technology Complex

2025-09-19
news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development, testing, and planned expansion of AI-enabled unmanned aerial weapon systems, which are AI systems by definition. Although no harm has yet occurred, the military nature and autonomous capabilities of these drones imply a credible risk of future harm, such as injury or violations of rights, making this an AI Hazard. There is no indication of realized harm or incident, so it cannot be classified as an AI Incident. The article is not merely complementary information since it focuses on the development and testing of potentially harmful AI systems rather than updates or responses to past incidents.

N. Korea's Kim calls for increased production of AI-controlled drones

2025-09-19
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology for controlling unmanned attack drones, which are weapons systems. The development and production of such AI-enabled military drones inherently carry a credible risk of causing harm in the future, including injury or death, disruption of security, and violations of human rights. Since no actual harm is reported yet, but the event plausibly leads to significant harm, it fits the definition of an AI Hazard rather than an AI Incident. The focus is on the development and expansion of AI-controlled weapons, which is a credible future risk.

Kim Jong-un oversees performance test of tactical attack drones

2025-09-19
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technologies are being integrated into tactical attack drones, which are weapons capable of causing injury or harm. The event involves the use and development of AI systems in a military context with clear potential for harm. Since no actual harm is reported yet, but the plausible future harm is significant, this qualifies as an AI Hazard under the framework. The mere development and testing of AI-enabled autonomous weapons systems is recognized as a credible risk for future AI incidents.

Kim Jong Un oversees a tactical attack drone test

2025-09-18
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI-enhanced military drones capable of attack missions, which are inherently linked to potential harm to human life and security. Although no specific incident of harm is reported, the use of AI in weaponized drones poses a credible risk of causing injury or harm in future conflicts. Therefore, this qualifies as an AI Hazard under the framework, as it plausibly could lead to an AI Incident involving harm to people or communities.

N. Korean Leader Oversees Performance Test of Tactical Attack Drones

2025-09-19
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies to enhance the operational capabilities of unmanned tactical attack drones. Although no harm has yet occurred, the development and testing of AI-enabled military drones with attack capabilities represent a credible risk of future harm, including injury or harm to persons and disruption of critical infrastructure. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the AI system's development and intended use in military applications.

Kim Jong Un calls AI-powered drone development top military priority

2025-09-19
Business Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and operational use of AI-powered unmanned aerial vehicles (drones) for military purposes by North Korea. Although no direct harm has yet been reported, the nature of these AI systems—combat and reconnaissance drones—implies a plausible risk of causing injury, disruption, or other harms in future military conflicts. The event thus fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident involving significant harm.

North Korea's Kim Jong Un supervises drone testing: KCNA

2025-09-19
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in suicide drones and the intention to strengthen AI capabilities in unmanned aerial vehicles. These drones are military weapons with potential for lethal use, and the development and enhancement of AI in such systems plausibly pose significant risks of harm, including injury or death, disruption, and violations of human rights. Although no specific harm is reported as having occurred yet, the nature of the AI-enabled weapon systems and their development constitute a credible risk of future harm, qualifying this as an AI Hazard.

Kim Jong-un oversees new suicide drones and bets on implementing AI

2025-09-19
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI-enabled military drones, specifically suicide drones with AI capabilities. While no direct harm is reported yet, the deployment of AI in autonomous weapons systems poses a credible risk of future harm, including injury, disruption, or violations of human rights. Therefore, this situation qualifies as an AI Hazard because the AI system's use in military drones could plausibly lead to an AI Incident in the future.

North Korea's Kim Jong Un oversees drone testing, KCNA says

2025-09-19
ThePrint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into suicide drones, which are inherently lethal autonomous systems. The involvement of AI in enhancing these drones' capabilities indicates a direct link to potential harm through autonomous weaponry. Since the article does not report any actual harm or incident but highlights ongoing development and testing, this qualifies as an AI Hazard due to the credible risk of future harm from AI-enabled autonomous weapons.

Kim Jong Un observed tests of new drones in the DPRK and advised adding artificial intelligence to them

2025-09-19
ТСН.ua
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to enhance military drones, which are inherently AI systems due to their autonomous or semi-autonomous operational nature. The event involves the development and intended use of AI systems in a military context, which could plausibly lead to harm such as injury or violations of human rights. Since no actual harm has been reported yet, but the potential for significant future harm is credible, this event qualifies as an AI Hazard rather than an AI Incident.

Kim Jong Un ordered the production of UAVs with artificial intelligence

2025-09-19
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to improve strike and reconnaissance drones, which are military autonomous systems. The development and deployment of AI-powered weaponized drones pose a credible risk of harm, including injury or harm to persons and disruption of security. Although no specific harm has yet occurred, the event plausibly leads to AI incidents due to the potential use of AI in lethal autonomous weapons. Therefore, this qualifies as an AI Hazard under the framework.

Kim Jong Un ordered strike drone capabilities to be strengthened with AI

2025-09-19
РБК-Украина
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into military strike drones, which are weaponized autonomous or semi-autonomous systems. The enhancement of these drones with AI directly relates to their operational capabilities in combat, which could plausibly lead to harm including injury or death, disruption, or other significant harms. Although no specific harm has yet occurred as per the article, the deployment and enhancement of AI-powered strike drones represent a credible and significant risk of future harm. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from AI-enabled military drones.

North Korea's Kim oversees drone test, orders AI development

2025-09-19
Pulse24.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development in attack drones with autonomous mission execution capabilities, which are AI systems. The event involves the use and development of these AI systems in a military context, which could plausibly lead to harm such as injury or death. No direct harm is reported from this specific test, so it is not an AI Incident. However, the credible risk of future harm from autonomous attack drones justifies classification as an AI Hazard.

N. Korea's Kim oversees performance test of tactical attack drones

2025-09-19
The Korea Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into tactical attack drones tested by North Korea, which are intended for combat and reconnaissance. While no direct harm is reported from these tests, the nature of AI-enabled attack drones inherently carries a credible risk of causing injury, death, or broader security harms if deployed in conflict. The development and production of such AI-powered weapons systems thus represent a plausible future risk of AI incidents. Since no actual harm has yet occurred or been reported, the event is best classified as an AI Hazard rather than an AI Incident.

Kim Jong Un inspects North Korea's Unmanned Aerial Technology Complex

2025-09-19
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions unmanned aerial vehicles (drones) used for reconnaissance and attack purposes, which are AI systems by definition due to their autonomous or semi-autonomous capabilities. The development and testing of such unmanned weapon systems directly relate to potential harm through their military use, including injury, disruption, or harm to communities. Although no specific harm is reported as having occurred, the event involves the use and development of AI-enabled autonomous weapons with clear potential for significant harm, qualifying it as an AI Hazard.

Kim Jong Un inspects unmanned attack aircraft performance test, orders advancing military AI applications

2025-09-19
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology applied to unmanned attack drones capable of autonomous mission execution and target destruction, which qualifies as an AI system. The event involves the use and development of such AI systems for military purposes. While no direct harm is reported yet, the nature of autonomous lethal drones inherently carries a plausible risk of causing injury, death, or broader security harms. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to persons or communities.

North Korea's Kim oversees drone test, orders AI development

2025-09-19
EWN Traffic
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of attack drones with AI capabilities, specifically autonomous mission execution and improved lethality. The development and deployment of AI-enabled military drones with offensive capabilities pose a credible risk of harm to people and communities, constituting a plausible future harm. Although no specific harm has yet occurred or been reported, the nature of the AI system's intended use in military attack drones and the emphasis on rapid AI development and mass production indicate a credible potential for significant harm. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is plausible but not yet realized.

Kim Jong Un Advances North Korea's Drone Capabilities | Technology

2025-09-19
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in North Korea's development of unmanned attack and reconnaissance drones, which are military AI systems. The development and planned strengthening of these AI-enabled drones pose a credible risk of future harm, including injury, disruption, or violations of rights, given their offensive military nature. Although no specific harm has yet been reported, the event plausibly leads to an AI Incident in the future due to the potential use of AI-powered autonomous weapons. Therefore, this qualifies as an AI Hazard under the framework.

Kim Jong Un Advances North Korea's Drone Capabilities with AI | Technology

2025-09-18
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to enhance North Korea's unmanned attack and reconnaissance drones, which are military systems capable of causing physical harm and disruption. The event involves the use and development of AI systems in a military context with clear potential for harm. Since no actual harm is reported yet but the development and testing of these AI-enabled attack drones plausibly lead to future harm, this qualifies as an AI Hazard under the framework. There is no indication of realized harm or incident at this stage, so it is not an AI Incident. The focus is on the potential threat posed by these AI-enhanced drones.

Kim Jong Un ordered the introduction of AI-equipped drones

2025-09-19
Gazeta.ua
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies to improve the combat and reconnaissance capabilities of military drones, which are weaponized systems. The development and deployment of AI-enabled armed drones pose a credible risk of harm to people and communities, including potential injury, loss of life, or escalation of conflict. Although no specific harm has yet occurred as per the article, the nature of these AI systems and their intended military use plausibly lead to significant harm. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm from AI-enabled autonomous weapons.

Kim Jong Un ordered the adoption of drones with artificial intelligence - media

2025-09-19
УКРІНФОРМ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to enhance military drones, which are AI systems by definition due to their autonomous or semi-autonomous operational capabilities. The event concerns the development and intended use of these AI systems in a military context, which could plausibly lead to significant harms including injury, disruption, or violations of human rights. Since no actual harm has been reported yet but the potential for harm is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident.

Kim Jong-un oversees new suicide drones and bets on implementing AI

2025-09-19
La opinion de Murcia
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems in military drones capable of autonomous or semi-autonomous attack functions (suicide drones). The deployment and testing of such AI-enabled weapon systems pose a credible risk of harm to people and communities, as these systems can be used in armed conflict and cause injury or death. Although the article does not report a specific incident of harm occurring, the development and testing of AI-powered lethal drones constitute a plausible future risk of significant harm. Therefore, this event qualifies as an AI Hazard under the framework, as it describes the development and use of AI systems that could plausibly lead to harm but does not document an actual incident of harm yet.

Kim Jong Un ordered UAV capabilities to be strengthened with AI | Espreso

2025-09-19
espreso.tv
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI in military drones, which are AI systems with potential for significant harm. Since the article describes plans to enhance UAV capabilities with AI but does not mention any realized harm or incidents, this constitutes a plausible future risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard, as the development and enhancement of AI-enabled military drones could plausibly lead to harms such as injury, disruption, or violations of rights in the future.

North Korea's Kim oversees drone test, orders AI development

2025-09-19
The Manila times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development in the context of military attack drones capable of autonomous mission execution and improved lethality. The event involves the use and development of AI systems (attack drones with AI capabilities). While the test itself did not report direct harm, the nature of the AI system and its intended use in military operations create a credible risk of future harm to people and communities. Therefore, this qualifies as an AI Hazard rather than an AI Incident, as the harm is plausible but not yet realized.

Kim Jong Un strengthens UAVs -- North Korea bets on AI technologies

2025-09-19
ФОКУС
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the development and deployment of AI-enabled armed drones by North Korea, which are intended for autonomous combat operations. This clearly involves AI systems in a military application with direct implications for harm to persons and communities through armed conflict. The AI's role in enhancing autonomous targeting and operation under jamming conditions increases the risk of harm. Although no specific incident of harm is reported as having occurred yet, the nature of the AI system and its intended use constitute a credible and significant risk of harm. Thus, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving injury, death, or broader harm in conflict.

North Korea's Kim oversees drone test, orders AI development

2025-09-19
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in attack drones and their autonomous capabilities, which qualifies as an AI system. The event involves the use and development of AI systems in a military context with high potential for harm. Since no actual harm or incident is reported, but the potential for significant harm is credible and plausible, this qualifies as an AI Hazard rather than an AI Incident. The focus is on the development and testing phase with plausible future risks rather than realized harm.

Kim Jong Un guides performance test of unmanned weapons systems, stresses developing artificial intelligence technology

2025-09-19
China News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technologies in unmanned weapon systems, which are being tested for military applications. Although no incident of harm is reported, the nature of AI-enabled autonomous weapons inherently carries a credible risk of causing injury, disruption, or violations of rights in future conflicts. The event focuses on the development and enhancement of these AI systems, fitting the definition of an AI Hazard as it plausibly could lead to an AI Incident. There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information or unrelated, as the AI system's development and use in military unmanned systems is central to the report and implies plausible future harm.

Kim Jong Un declares AI drone development a 'top priority' | News.az

2025-09-19
News.az
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development in unmanned weapons systems and drones, which are AI systems by definition. Although no direct harm or incident is reported, the context of military use, nuclear capabilities, and strategic threats implies a credible risk of future harm. The development and production of AI-powered drones in a hostile geopolitical context could plausibly lead to incidents involving injury, disruption, or violations of rights. Hence, this is an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI involvement is central to the event.

North Korea's Kim oversees drone test, orders AI development

2025-09-19
KTBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development in attack drones capable of autonomous mission execution and operation under GPS jamming, indicating AI system involvement in military technology. While no direct harm is reported, the nature of the AI system's use in weaponized drones and the context of ongoing military conflict and alliances imply a credible risk of future harm to people and communities. The event does not describe an actual incident of harm but highlights a plausible future risk, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

North Korea's Kim oversees drone test, orders AI development

2025-09-19
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (attack drone) being tested and further AI development ordered. While no direct harm is reported, the use of AI in attack drones inherently carries plausible risks of causing injury, disruption, or other harms. The event is about the development and potential use of AI-enabled military technology with high potential for misuse and harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

North Korea: Kim Jong Un attends attack drone test and wants to 'rapidly develop' AI | TF1 INFO

2025-09-19
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI-enabled attack drones, which are military AI systems capable of autonomous or semi-autonomous operations. The article does not report any actual harm or incident caused by these drones yet, but the deployment and enhancement of such AI military technology plausibly lead to significant harms, including injury, disruption of critical infrastructure, and violations of human rights. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident.

North Korea openly shows off new drones; Kim Jong Un is pleased and makes special mention of AI - Tencent News

2025-09-19
QQ新闻中心
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in North Korea's new unmanned weapon systems, including strategic and tactical drones. These AI-enabled drones are described as having military strategic value and combat effectiveness, implying autonomous or semi-autonomous operation. The development and deployment of AI in military drones pose credible risks of harm, including injury, violations of human rights, and disruption of security. However, no actual harm or incident caused by these AI systems is reported; the article focuses on their development, testing, and strategic importance. Therefore, the event fits the definition of an AI Hazard, as the AI systems could plausibly lead to an AI Incident in the future due to their military use and autonomous capabilities.

North Korean leader oversees drone attack tests and orders AI development

2025-09-19
Yahoo!
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems in military drones capable of autonomous attacks, which are explicitly described as a priority for North Korea's armed forces modernization. Although no direct harm is reported in this article, the nature of AI-enabled autonomous weapons inherently carries a credible risk of causing injury, death, or broader harm in military conflicts. The article's focus on expanding AI capabilities in drones and their use in warfare contexts supports classification as an AI Hazard rather than an Incident, since harm is plausible but not yet realized or reported here.

Kim Jong-un oversees new suicide drones and bets on AI

2025-09-19
Yahoo!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in drones used for military attack purposes, including suicide drones tested by North Korea. These drones are AI-enabled autonomous weapon systems, which by their nature pose a credible risk of causing injury, death, and broader security harms. Since the article does not report actual harm occurring but highlights ongoing development and testing, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a clear indication of plausible future harm from AI-enabled military technology.

Kim Jong-un oversees drone test and demands AI development

2025-09-19
O Globo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in military drones capable of autonomous mission execution, which qualifies as AI system involvement. While no actual harm or incident is reported, the nature of the AI system (autonomous attack drones) and the context (military use, increased lethality, and mass production) create a credible risk of future harm, including injury, violations of human rights, and disruption of peace. Hence, this is an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the development and testing of AI-enabled drones with potential for harm, not on responses or updates to past incidents. It is not Unrelated because the event clearly involves AI systems and plausible harm.

DPRK leader inspects unmanned aviation technology complex

2025-09-19
新华网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and testing of unmanned aerial vehicles and attack drones, which almost certainly involve AI systems for navigation, targeting, and operational autonomy. The event concerns the use and development of these AI systems in a military context, which inherently carries risks of injury, disruption, and violation of human rights. Since the article reports on testing and plans to expand this technology but does not describe any actual harm or incident resulting from their use, the event fits the definition of an AI Hazard rather than an AI Incident. The plausible future harm includes potential injury, disruption, and violations arising from the deployment of AI-enabled autonomous weapons.

DPRK: supreme leader inspects the Unmanned Aerial Technology Complex

2025-09-19
Les nouvelles à travers la Chine et le monde
Why's our monitor labelling this an incident or hazard?
The drones described are likely AI systems or incorporate AI for autonomous or semi-autonomous operation, given their roles in surveillance and tactical attack. The development and testing of such AI-enabled military drones with combat capabilities represent a plausible risk of future harm, including injury, disruption, or violations of rights, due to their potential use as autonomous weapons. Although no specific harm is reported as having occurred yet, the event clearly indicates the advancement and planned expansion of AI-enabled military drone capabilities, which could plausibly lead to significant harm. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.

North Korea's Kim Jong Un Oversees Drone Test, Orders AI Development

2025-09-19
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development in military drones and their autonomous capabilities, which qualifies as AI system involvement. The event stems from the use and development of AI systems in military technology. Although no direct harm or incident is reported, the described capabilities and military context imply a credible risk of future harm, such as increased lethality and autonomous attacks, which fits the definition of an AI Hazard. There is no indication of realized harm or incident, so it cannot be classified as an AI Incident. It is more than general AI news, so it is not Unrelated or Complementary Information.

North Korea: Kim Jong-un attends attack drone test and calls for AI development

2025-09-19
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of attack drones whose development and testing are overseen by a state actor known for military aggression. The use of AI in autonomous or semi-autonomous weapons systems is widely recognized as a significant hazard due to the potential for lethal harm, escalation of conflict, and violations of international law. Since the article describes a test and encouragement of AI development for military drones without reporting actual harm yet, it fits the definition of an AI Hazard rather than an AI Incident. The plausible future harm includes injury, disruption, and violations of human rights or international law.

North Korea's Kim oversees test of tactical attack drones, pushes AI advancement

2025-09-19
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in armed unmanned aerial vehicles (drones) with combat applicability, indicating the presence of AI systems. Although no direct harm is reported, the development and mass production of AI-enabled tactical attack drones clearly pose a credible risk of future harm, such as injury or disruption in conflict scenarios. The event is about the development and testing phase with a focus on accelerating AI capabilities for military use, which fits the definition of an AI Hazard rather than an Incident or Complementary Information. There is no indication of realized harm yet, so it is not an AI Incident. It is not unrelated or merely complementary information because the focus is on the potential for harm through AI-enabled weapons development.

North Korea unveils two new suicide drones with AI focus

2025-09-19
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being rapidly advanced in the development of North Korea's suicide drones, which have been tested against military targets. The drones' AI capabilities, such as target recognition or learning, imply autonomous or semi-autonomous operation. Given the military context and the potential for these drones to cause injury, death, or disruption, the event represents a plausible future harm scenario. Since no actual harm or incident is reported as having occurred yet, but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident.

North Korea tests new suicide drone weapons, some possibly AI-powered

2025-09-19
중앙일보
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and testing of suicide drones that are possibly AI-powered, capable of recognizing and striking targets based on visual profiles. These drones are actively tested and likely to be deployed, indicating realized use rather than hypothetical risk. The involvement of AI in autonomous lethal weapons systems directly relates to harm to people and communities, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a concrete instance of AI system use with direct implications for harm.

Kim Jong-un oversees drone test and demands more AI - 19/09/2025 - World - Folha

2025-09-19
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in military drones capable of autonomous attack missions, which are being tested and expanded under Kim Jong-un's supervision. While no direct harm from these AI systems is reported in this article, the context of their use in active conflict zones and the nature of autonomous lethal drones create a plausible risk of injury or death, qualifying this as an AI Hazard. The event does not describe a realized harm (incident) but highlights a credible future threat from AI-enabled military technology.

Kim Jong Un oversees drone test and demands AI development

2025-09-19
UOL notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in tactical drones capable of autonomous mission execution, which fits the definition of an AI system. The focus is on the development and enhancement of these AI-enabled drones for military use, which could plausibly lead to harms such as injury, death, or broader conflict escalation. Since no actual harm or incident is reported yet, but the potential for harm is credible and significant, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

North Korea conducts attack drone test and bets on AI

2025-09-19
UOL notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military drone development, which are attack drones supervised by the North Korean leader. The presence of AI in autonomous or semi-autonomous weapon systems is reasonably inferred. Although no specific harm has yet occurred or been reported, the development and testing of AI-enabled attack drones plausibly could lead to harms such as injury, disruption of critical infrastructure, or violations of human rights. Therefore, this event qualifies as an AI Hazard due to the credible risk posed by AI-powered military drones.

North Korea's copycat US drone takes flight as Kim Jong Un watches on

2025-09-19
Newsweek
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI as a core technology in North Korea's unmanned military equipment, including drones designed for reconnaissance and attack. The drones are clones of U.S. systems and are intended for military use, which inherently carries risks of injury, disruption, and violations of rights. While no direct harm or incident is reported, the development and potential export of such AI-enabled weapon systems plausibly could lead to significant harm. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

Kim Jong-Un oversees drone attack tests; orders expanded use of AI | El Universal

2025-09-19
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and planned expansion of AI in military drones capable of autonomous attacks, which are inherently dangerous systems. While no direct incident of harm caused by these AI systems is reported, the nature of the technology and its intended use in military conflict plausibly could lead to injury, death, or other serious harms. This fits the definition of an AI Hazard, as the development and deployment of such AI systems could plausibly lead to an AI Incident. There is no indication of a realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the development and military use of AI-enabled drones with potential for harm.

Kim Jong Un guides performance test of unmanned weapons systems, showcases innovative combat effectiveness

2025-09-19
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions unmanned aerial vehicles (drones) used for reconnaissance and attack, which are typically equipped with AI systems for autonomous or semi-autonomous operation. The testing and approval of plans to expand these capabilities indicate ongoing development and use of AI-enabled weapon systems. Given the military context and the potential for these AI systems to cause harm in conflict scenarios, this event qualifies as an AI Hazard because it plausibly could lead to harm through the deployment of autonomous weaponry. There is no indication that harm has yet occurred or that a malfunction has led to harm, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it concerns the development and use of AI systems with clear potential for harm.

North Korea: Kim Jong-un oversees drone test and orders AI to be implemented in its military arsenal

2025-09-19
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (AI-enabled autonomous military drones) that could plausibly lead to harm such as injury, disruption, or violations of human rights. The article does not describe a realized harm or incident but highlights the introduction and expansion of AI in military drones with autonomous capabilities, which is a credible risk for future AI-related harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

DPRK bets on AI-powered strike drones: new order from Kim Jong Un revealed. Photos

2025-09-19
OBOZREVATEL
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems in strike drones, which are weaponized autonomous or semi-autonomous systems. While the article does not describe any actual harm or incident resulting from these drones, the nature of AI-powered military drones inherently carries a credible risk of causing injury, death, or destruction if used in conflict. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to people or communities. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI-enabled military drone tests and plans for their enhancement, which is a credible future risk.

DPRK shows new 'Kumsong' strike drones the size of a passenger aircraft for the first time (photos)

2025-09-19
ТСН.ua
Why's our monitor labelling this an incident or hazard?
The event involves AI systems insofar as the drones' development includes artificial intelligence and operational capabilities, implying AI integration. The drones are military strike drones, which inherently carry a credible risk of causing harm to people, communities, or infrastructure if used. Since no harm has yet occurred or been reported, but the potential for harm is credible and plausible, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses, mitigation, or broader governance but on the demonstration and strategic development of AI-enabled military drones, fitting the definition of an AI Hazard.

Advancing AI for military use is an absolute priority for North Korea, says Kim Jong-un

2025-09-19
VEJA
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems for military purposes, specifically in drones and unmanned vehicles, which are AI systems by definition. The article highlights the potential for these AI-enabled military tools to increase North Korea's military threat, which could plausibly lead to harms such as injury, disruption, or violations of rights. Since no actual harm or incident has been reported yet, but the risk is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses, updates, or broader ecosystem context, so it is not Complementary Information. It is clearly related to AI systems and their potential harms, so it is not Unrelated.

North Korea's Kim Oversees Drone Test, Orders AI Development

2025-09-19
Channels Television
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development in military drones capable of autonomous mission execution and improved lethality. The drones have been tested successfully but no direct harm or incident caused by these AI systems is reported. The potential for these AI-enabled drones to cause injury, death, or broader harm in conflict zones is credible and significant, meeting the definition of an AI Hazard. The event is not a Complementary Information piece because it focuses on the development and testing of AI military technology with plausible future harm, not on responses or updates to past incidents. It is not an AI Incident because no realized harm is described. It is not Unrelated because AI systems are clearly involved.

Kim Jong Un oversees drone test and demands AI development

2025-09-19
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in military drones capable of autonomous operations, which qualifies as AI system involvement. While no direct harm is reported, the nature of armed autonomous drones inherently carries a credible risk of causing injury, disruption, and human rights violations in future conflicts. The event is about the development and testing phase and the strategic prioritization of AI-enhanced drones, indicating plausible future harm rather than realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Kim Jong Un directs use of artificial intelligence to boost drone combat capability

2025-09-19
早报
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the introduction of AI to improve unmanned weapon systems' combat capabilities, which qualifies as an AI system development and intended use. Given the military context and the potential for autonomous weapons to cause injury, disruption, or other harms, this event constitutes an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The focus is on the plausible future risk from AI-enabled military drones.

Kim Jong-un oversees attack drone test and orders AI development to modernize the North Korean arsenal

2025-09-19
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology for military drones, which are autonomous or semi-autonomous attack systems. Such systems qualify as AI systems under the definition. The event involves the development and intended use of these AI systems for military purposes, which could plausibly lead to significant harms such as injury, disruption, or violations of rights. Since no actual harm is reported yet, but the risk is credible and significant, this event is best classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the development and deployment of AI-enabled military drones with potential for harm, not on responses or updates to past incidents.

North Korea openly shows off new drones; Kim Jong Un is pleased and makes special mention of AI

2025-09-19
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as integrated into new unmanned military drones with reconnaissance and attack functions. The use of AI in autonomous or semi-autonomous weaponry inherently carries a credible risk of causing harm (injury, disruption, or violations of rights) if deployed in conflict or misuse scenarios. Although no actual harm or incident is reported, the development and public display of such AI-enabled weapons with offensive capabilities represent a plausible future risk of AI-related harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The article does not focus on responses, mitigation, or legal/governance actions, nor is it unrelated to AI systems.

Kim Jong-un oversees new suicide drones and bets on implementing AI

2025-09-19
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in military drones, including suicide drones, which are weaponized autonomous systems. The development and testing of such AI-enabled weapons systems inherently carry a credible risk of causing injury, death, or broader security harms. Since the article does not report actual harm occurring but focuses on the development and testing phase, this qualifies as an AI Hazard rather than an AI Incident. The AI system's involvement is in the development and intended use of autonomous lethal drones, which plausibly could lead to harm.

Kim Jong Un oversees attack drone test and pushes AI use in North Korea

2025-09-19
Exame
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being integrated into North Korea's attack drones, which are used in military operations. The drones' autonomous capabilities and deployment in conflict zones indicate direct involvement of AI systems in causing harm, including injury and death of soldiers and military conflict. The development and use of these AI-powered drones therefore constitute a direct link to realized harm, injury to persons, and broader security risks, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Kim Jong-un oversees new suicide drones and bets on artificial intelligence to strengthen North Korea's arsenal

2025-09-19
Antena3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI in North Korea's military drones, including suicide drones that have been tested in exercises. These drones are AI systems as they perform autonomous or semi-autonomous tasks such as target impact. The event involves the use and development of AI systems with a clear potential to cause harm (injury, death, military conflict escalation). However, no actual harm or incident resulting from these AI systems is reported; only tests and preparations are described. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to significant harm in the future, but no harm has yet occurred.

Kim Jong-un oversees new suicide drones and calls for applying AI in the military

2025-09-19
PanAm Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being applied in military drones, including suicide drones, which are weaponized autonomous or semi-autonomous systems. The development and testing of such AI-enabled weapons systems inherently carry a credible risk of causing injury, death, or other harms if used in conflict or misused. Since no actual harm is reported yet, but the event clearly involves AI system development with plausible future harm, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a clear indication of potential future harm from AI-enabled military technology.

Kim Jong Un oversaw drone tests and ordered them to be strengthened with AI

2025-09-19
Лига Новости
Why's our monitor labelling this an incident or hazard?
The involvement of AI in military drones for combat and reconnaissance purposes implies a high potential for harm, including injury or harm to persons and disruption of critical infrastructure, given the nature of autonomous weapon systems. Although no specific harm has been reported yet, the development and enhancement of AI-powered military drones constitute a credible risk of future harm. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from AI-enabled autonomous weapons development and deployment.

Kim Jong-un oversees drone tests and pushes AI in the North Korean army

2025-09-19
MVS Noticias
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military drones that simulate suicide attacks, which are lethal autonomous weapons. The AI system's use in these drones relates directly to potential harm to persons and to the escalation of military conflict. The article describes actual tests and deployment of these AI-enabled drones, not just potential future risks. Therefore, this is classified as an AI Incident, as the AI system's use is directly tied to ongoing harm risks in a military conflict context.

North Korean Leader Inspects Unmanned Weapons Performance Test

2025-09-19
The Diplomat Magazine
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems in unmanned military drones, which are AI systems by definition. The testing and deployment of such AI-enabled weapons could plausibly lead to harm, including injury, disruption, or violations of rights, given their military nature. However, since the article only describes performance tests and strategic intentions without any actual harm or incidents occurring, this qualifies as an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with potential for harm.

Kim Jong Un oversees drone tests and demands AI development

2025-09-19
Correio do povo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military drones capable of autonomous operation and attack, which inherently carries a credible risk of causing harm (injury, disruption, violations of rights) if deployed. Although no incident of harm is reported, the development and testing of such AI-enabled weapons systems represent a plausible future threat. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI involvement is clear and central to the event.

North Korea openly shows off new drones; Kim Jong Un makes special mention of AI

2025-09-19
杭州网
Why's our monitor labelling this an incident or hazard?
The article reports on the development and testing of AI-enabled unmanned military drones by North Korea, including tactical attack drones. While no specific harm or incident is reported as having occurred, the nature of these AI systems as autonomous weapons implies a credible risk of future harm, such as injury or violation of human rights. Therefore, this event fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to significant harm, but no direct harm has yet been reported.

Kim Jong-un's new "top priority": developing military drones with artificial intelligence

2025-09-19
ElNacional.cat
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military drones, which are AI systems capable of autonomous operation and lethal action. While no direct harm or incident is reported, the development and potential deployment of such AI-enabled weapons systems plausibly could lead to significant harms, including injury, disruption, and violations of rights. Therefore, this situation fits the definition of an AI Hazard, as it describes a credible future risk stemming from the development and use of AI in military drones by North Korea.

North Korea conducts attack drone test and bets on AI

2025-09-19
Istoe dinheiro
Why's our monitor labelling this an incident or hazard?
The event involves the use and testing of AI systems integrated into attack drones that have demonstrated lethal capability by destroying targets. The AI system's use in autonomous or semi-autonomous military drones relates directly to harm to persons and communities through military conflict. The article reports actual tests in which targets were successfully destroyed, demonstrating the systems' destructive capability. The development and deployment of such AI-enabled weapons systems fall under the definition of an AI Incident because the AI's use has directly produced harm, or the demonstrated capability for harm, in a military context. The article does not merely discuss potential future harm but reports on actual tests and operational use, confirming the incident classification.

Kim Jong Un tested new drones and ordered them to be fitted with artificial intelligence

2025-09-19
5 канал
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies to upgrade tactical and strategic drones, which are military autonomous systems. The development and planned deployment of AI-equipped drones with strike and reconnaissance capabilities pose a credible risk of harm, including potential injury or harm to people, disruption of critical infrastructure, and broader security threats. Although no harm has yet occurred, the event plausibly leads to future AI incidents due to the militarization and autonomous capabilities of these drones. Therefore, this qualifies as an AI Hazard under the framework, as it involves the development and intended use of AI systems that could plausibly lead to significant harm.

Kim oversees North Korea attack drone test, pushes AI advancement

2025-09-19
TRT World
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI advancement in unmanned armed drones tested for combat effectiveness. Although no incident of harm has occurred yet, the development and testing of AI-enabled attack drones inherently carry a credible risk of causing harm in future military operations. This aligns with the definition of an AI Hazard, as the event plausibly could lead to injury, disruption, or other harms through the use of AI in autonomous weapons. There is no indication of realized harm or incident, so it is not an AI Incident. The focus is on the development and testing phase, not on responses or complementary information, and the event is clearly related to AI systems, so it is not unrelated.

Kim ordered North Korean drones to be strengthened with artificial intelligence

2025-09-19
ZN.UA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to enhance North Korean military drones, which are tactical and strategic UAVs. The development and deployment of AI-enabled weaponized drones pose a credible risk of future harm, including injury, disruption, or escalation of conflict. Since no actual harm or incident is described, but the potential for harm is clear and plausible, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a credible risk related to AI use in military systems.

Kim Jong-un bets on artificial intelligence in new drones

2025-09-19
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems integrated into military drones capable of autonomous attack missions, which can directly lead to harm including injury or death (harm to persons) and disruption of security. Although no specific incident of harm is reported, the article clearly indicates the plausible future harm from these AI-enabled drones, constituting an AI Hazard. The article does not describe an actual harm event yet, so it is not an AI Incident. It is more than general AI news or complementary information because it highlights a credible threat from AI military technology.

North Korea's Kim Jong Un calls AI, drone development a 'top priority'

2025-09-19
Stars and Stripes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development linked to military drones, which are likely to involve AI systems for reconnaissance and attack purposes. While no direct harm or incident is reported, the context of military AI development and mass production of drones indicates a credible risk of future harm, such as injury or disruption from autonomous weapons. The event is about the development and prioritization of AI-enabled military technology, not about an actual incident or realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Kim oversees drone test, orders AI development

2025-09-19
New Age
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI-enabled attack drones with autonomous mission execution capabilities. The development and use of such AI systems in military drones can plausibly lead to harms including injury or death (harm to persons), disruption of critical infrastructure, and broader harm to communities and security. Since the article does not report actual harm occurring from this test but emphasizes the potential and strategic intent to expand AI-driven drone capabilities, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's development and intended use could plausibly lead to significant harm in the future, meeting the criteria for an AI Hazard.

North Korea unveils new Kumsong strike drones

2025-09-19
ZAXID.NET
Why's our monitor labelling this an incident or hazard?
The drones are AI systems or at least AI-enabled systems given their autonomous strike role. The event involves the development and testing of these AI-enabled military drones, which could plausibly lead to harm such as injury, disruption, or violations of rights if used in conflict. Since no harm has yet occurred or been reported, this constitutes an AI Hazard rather than an AI Incident. The mention of prioritizing AI development in military modernization further supports the plausible future risk of harm.

Kim Jong Un oversees drone tests and demands AI development

2025-09-19
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in military drones capable of autonomous attack missions, which are demonstrated in tests. The drones' autonomous capabilities and military use imply a credible risk of harm to people and security. Since no actual harm from AI malfunction or misuse is reported yet, but the potential for significant harm is clear and plausible, this event fits the definition of an AI Hazard rather than an AI Incident. The focus is on the development and potential use of AI in lethal autonomous weapons, which is a recognized AI Hazard due to the plausible future harm.

North Korea accelerates AI drone push

2025-09-19
Defence Blog
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the integration of AI into drone control systems and combat operations, indicating the presence of AI systems. Although no direct harm has occurred or been reported, the development of AI-powered autonomous weapons systems is widely recognized as a significant potential hazard due to the plausible future risks of injury, escalation of conflict, and violations of international law. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future.

The size of a passenger plane: North Korea tests new "Kumsong" strike UAVs (photos)

2025-09-19
Новости FaceNews.ua
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into military drones capable of strike operations. Given the military context and the potential for these AI-enabled drones to cause harm through autonomous or semi-autonomous attacks, this constitutes a plausible risk of significant harm to people and communities. The article describes actual testing and deployment, indicating the AI system's use rather than mere potential. This aligns with the definition of an AI Incident because the AI system's use in military strike drones relates directly to potential injury or harm to persons or groups; even though no specific harm is reported yet, the active testing and deployment in a conflict context imply imminent harm potential. Therefore, this is best classified as an AI Incident due to the direct link between AI-enabled weapon systems and harm potential in an active conflict environment.

North Korea: Kim Jong Un bets on drones and artificial intelligence

2025-09-19
Linfo.re
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (combat drones with AI capabilities) intended for military use, which inherently carry a high risk of causing injury or harm to people. The article reports a successful test and plans for mass production, indicating the AI system's role in enhancing military strike capabilities. While no direct harm is reported yet, the plausible future harm from such AI-enabled weapons is significant. Therefore, this event qualifies as an AI Hazard due to the credible risk of harm from AI-powered autonomous weapons development and deployment.

Kim Jong Un declares AI military drone development a 'top priority'

2025-09-19
RocketNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and prioritization of AI in military drone development, which qualifies as an AI system involved in weaponry. While no direct harm is reported, the nature of the AI system's intended use in military drones and weapons with strategic capabilities plausibly leads to significant future harm, including injury, disruption, or violations of rights. Therefore, this event fits the definition of an AI Hazard, as it describes a credible risk of harm stemming from the development and deployment of AI-enabled military drones.

North Korea openly shows off new drones, unpixelated this time; Kim Jong Un is very pleased

2025-09-19
Baidu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI as a core technology in the modernization of North Korea's unmanned weapon systems, including strategic and tactical drones. These drones are described as having reconnaissance and attack capabilities, with some being suicide drones. The use of AI in autonomous or semi-autonomous military drones inherently carries risks of harm, including injury or death, disruption of security, and potential violations of international law. Since the article does not report any actual incident of harm but focuses on the development, testing, and production plans, the event fits the definition of an AI Hazard rather than an AI Incident. The AI system's development and intended use in military drones plausibly could lead to significant harm in the future.

North Korea openly shows off new drones; Kim Jong Un is pleased and makes special mention of AI

2025-09-19
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in new unmanned military drones and their testing, with leadership emphasizing AI's role in enhancing combat capabilities. These AI systems are designed for reconnaissance and attack, which inherently carry risks of harm to people and communities. Although no actual harm or incident is reported, the development and deployment of AI-enabled autonomous or semi-autonomous weapons constitute a credible and plausible risk of future harm, fitting the definition of an AI Hazard. There is no indication of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information about AI governance or responses, nor is it unrelated news. Hence, the classification is AI Hazard.

Kim Jong-un oversees drone tests in North Korea, media say

2025-09-19
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to enhance military drones, including attack and suicide drones, which are AI systems with potential for significant harm. Since no harm has yet occurred but the development and testing of such AI-enabled weapons plausibly could lead to injury, disruption, or violations of rights, this event fits the definition of an AI Hazard. There is no indication of realized harm or incidents, so it is not an AI Incident. It is not merely complementary information because the focus is on the development and testing of AI-enabled military drones with clear potential for harm, not on responses or ecosystem context. Therefore, the classification is AI Hazard.

Kim Jong Un calls AI drone development "top priority"

2025-09-20
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into unmanned drones and weapons, which are being rapidly developed and produced under state leadership. These AI systems are intended for military use, including suicide drones and surveillance vehicles, which inherently carry risks of causing injury, death, or other harms. No actual harm or incident is reported yet, but the strategic focus on AI-enabled autonomous weapons and their expansion plausibly leads to future harms. Hence, this is an AI Hazard rather than an AI Incident. The article also references related cyberattacks potentially linked to North Korea's weapons program, reinforcing the hazardous context but not indicating a realized AI Incident.

North Korean leader Kim Jong Un attended attack drone tests and called for the development of AI

2025-09-19
Franceinfo
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems in military drones, which are weaponized autonomous or semi-autonomous systems. The testing of attack drones that destroy targets and the call to accelerate AI development for military drones imply a credible risk of future harm, including injury or harm to persons and disruption of critical infrastructure. Although no direct harm is reported yet, the event plausibly leads to AI incidents due to the potential use of AI-enabled weapons in conflict. Therefore, this qualifies as an AI Hazard under the framework, as it plausibly could lead to harm through AI-enabled military applications.

North Korea's Kim Jong Un Oversees Drone Test

2025-09-19
Sputnik India
Why's our monitor labelling this an incident or hazard?
The drones are described as tactical attack drones, implying autonomous or AI-assisted capabilities. The event involves the use and development of AI systems in a military context, which could plausibly lead to harms such as injury, disruption, or violations of human rights if used in conflict. Since the article reports a test and demonstration without actual harm occurring yet, this fits the definition of an AI Hazard rather than an AI Incident.

Top DPRK leader inspects unmanned aeronautical technology complex

2025-09-19
english.news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions unmanned surveillance vehicles and tactical attack drones, which almost certainly incorporate AI systems for autonomous navigation, targeting, or decision-making. The testing and approval of these weapons indicate active use and development of AI-enabled military technology. While no direct harm is reported, the nature of these systems and their combat application imply a plausible risk of significant harm in the future. Therefore, this event qualifies as an AI Hazard due to the credible potential for AI-driven harm from autonomous weapons development and deployment.

Kim Jong Un declares AI military drone development a 'top priority'

2025-09-19
Today Headline
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military drone development and unmanned weapons systems, which are AI systems by definition. The event stems from the use and development of AI technology for military purposes. Although no direct harm is reported yet, the deployment of AI-enabled autonomous or semi-autonomous weapons systems by North Korea plausibly could lead to harms such as injury, disruption of critical infrastructure, or violations of human rights. The article's focus on prioritizing AI for military modernization and drone production indicates a credible risk of future AI-related harm. Hence, this is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Kim Jong Un inspects the unmanned aviation technology complex and oversees performance tests of unmanned weapon systems, expressing satisfaction with the results

2025-09-19
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the testing and development of unmanned aerial weapon systems, which are highly likely to incorporate AI for autonomous operation and targeting. While no actual harm is reported, the military use of such AI-enabled drones poses a credible risk of injury, harm, or disruption if deployed. The event focuses on the development and testing phase, indicating plausible future harm rather than realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the testing and approval of AI-enabled weapon systems with potential for harm, not on responses or governance. It is not unrelated because AI systems are reasonably inferred to be involved in these autonomous weapon systems.

Kim Jong Un oversees North Korean drone test, orders AI military development

2025-09-19
thesun.my
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled attack drones successfully conducting target destruction, indicating AI system use in military operations. The autonomous mission execution and enhanced tactical flexibility imply AI systems are integral to the drones' operation. While no direct harm is reported, the nature of the AI system (attack drones) and the military context imply a credible risk of injury, death, or disruption, fulfilling the criteria for an AI Hazard. The event does not describe an actual incident of harm caused by AI but highlights the potential for significant future harm from AI military applications, thus fitting the AI Hazard classification.

Kim Jong Un watched drone tests in the DPRK and ordered them to be strengthened with AI

2025-09-19
unn.ua
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to enhance military drones, including strike drones, which are weaponized autonomous or semi-autonomous systems. The development and deployment of such AI-enabled weapons systems inherently carry plausible risks of causing injury, disruption, and other harms. Since no actual incident of harm is reported, but the event involves the development and planned enhancement of AI military drones, it fits the definition of an AI Hazard rather than an AI Incident. The presence of AI in these drones and the context of their use for military purposes supports classification as an AI Hazard due to plausible future harm.

Don't show Putin: the DPRK unveils new drones the size of a passenger plane

2025-09-19
Комментарии Украина
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and testing of unmanned aerial vehicles (drones) equipped with AI technologies for military use by North Korea. These drones are described as strategic and tactical weapons with combat effectiveness, implying autonomous or AI-assisted operation. The involvement of AI in weapon systems with offensive capabilities presents a credible risk of future harm, including injury, disruption, or violations of human rights. Since no actual harm or incident is reported, but the potential for harm is clear and plausible, the event fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses, updates, or general AI news, so it is not Complementary Information. It is directly related to AI systems and their potential for harm, so it is not Unrelated.

Alert over the "global hawk", the most lethal weapon of all, which makes world powers tremble: it combines AI and precision

2025-09-20
El Cronista
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as integrated into military drones capable of autonomous lethal operations, including target identification and attack without human control. This use of AI in autonomous weapons systems directly implicates potential harm to persons and communities (harm category a and d) through military conflict or escalation. Although no specific incident of harm is reported yet, the article highlights the credible and significant risk posed by these AI-enabled weapons, fitting the definition of an AI Hazard. The development and deployment of such autonomous lethal AI systems is a recognized AI Hazard due to the plausible future harm they could cause.

North Korea: Kim Jong-un oversees new suicide drones and bets on the use of AI

2025-09-20
Panamericana Televisión
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into military drones, which are autonomous or semi-autonomous systems capable of lethal action. The use of AI in such weapons systems inherently carries a credible risk of causing injury or harm to people and communities. Since the article reports on testing and development without describing any realized harm, it does not meet the threshold for an AI Incident but clearly constitutes an AI Hazard due to the plausible future harm from AI-enabled autonomous weapons. Therefore, the classification is AI Hazard.

North Korea's Kim oversees drone test, orders AI development

2025-09-20
DT News
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems in military drones capable of autonomous mission execution, which can plausibly lead to significant harms such as injury, disruption of critical infrastructure, and violations of human rights. Although the article does not describe a realized harm, the nature of the AI system's intended use in attack drones and the expressed intent to expand AI capabilities for military purposes constitute a credible future risk. Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

North Korean leader oversees test of AI-powered attack drones

2025-09-20
Intellinews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in attack drones, which are military AI systems capable of autonomous or semi-autonomous offensive operations. While no direct harm has been reported from their use so far, the development and testing of such AI-powered weapons systems plausibly could lead to harm including injury, disruption, or violations of human rights and international law. Therefore, this event fits the definition of an AI Hazard, as it describes circumstances where AI system development and use could plausibly lead to an AI Incident in the future. There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information because the focus is on the development and testing of AI attack drones with potential for harm, not on responses or ecosystem context. It is not unrelated because AI systems are clearly involved and the event concerns potential harm.

Alert over the "global hawk"

2025-09-20
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Kumsong drone) with autonomous capabilities that could directly lead to harm through military conflict or escalation, including injury or harm to people and disruption of international security. Although no specific incident of harm has yet occurred, the article highlights a credible and plausible risk of future harm due to the deployment of AI-enabled autonomous weapons. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving significant harm.

Kim Jong Un inspects "Kumsong" unmanned attack drones, satisfied with the test results

2025-09-19
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development to enhance unmanned attack drones' operational capabilities, including autonomous functions under GPS or communication interference. While no specific harm or attack incident is reported, the deployment and enhancement of AI-powered military drones inherently carry a credible risk of causing injury, harm to communities, or escalation of conflict. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm, but no actual harm event is described in the article.

North Korean state media: "Kumsong" drone tests; Kim Jong Un inspects and is very satisfied

2025-09-19
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (AI-enhanced drones) with military applications, which are highly likely to cause harm if used. Since no harm has yet occurred but the development and testing of AI-enabled weaponized drones is ongoing, this constitutes an AI Hazard. There is no indication of realized harm or incident in the article, so it is not an AI Incident. The article is not merely complementary information because it focuses on the testing and enhancement of AI military drones, which plausibly could lead to harm.

Kim Jong Un inspects unmanned weapons tests as "Kumsong" attack drones make an appearance

2025-09-19
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to enhance the operational capabilities of unmanned attack drones, which are AI systems by definition. The event involves the development and testing of these AI-enabled weapons, with no reported actual harm or incident occurring yet. The potential for these AI weapons to cause significant harm in future conflicts is credible and plausible, meeting the criteria for an AI Hazard. There is no indication of realized harm or incident, so it cannot be classified as an AI Incident. It is not merely complementary information because the focus is on the development and testing of AI-enabled weapons with potential for harm, not on responses or broader ecosystem context. Hence, the classification is AI Hazard.

Kim Jong Un inspects unmanned attack drone tests and orders the development of artificial intelligence

2025-09-19
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development for unmanned attack drones that have demonstrated operational capability to destroy targets, indicating AI system use in military autonomous weapons. While no actual harm event is reported, the nature of the AI system (autonomous attack drones) inherently carries a credible risk of causing injury or death and other harms. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harm, including injury and violations of rights, even if no harm has yet occurred or been reported.

Another copycat: Kim Jong Un inspects drones as strategic reconnaissance aircraft and the newly named "Kumsong" make their debut

2025-09-19
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems integrated into unmanned weapon systems (drones) with military applications. Although no direct harm is reported as having occurred yet, the deployment and expansion of AI-powered autonomous or semi-autonomous weapons plausibly pose significant risks of harm, including injury, disruption, or violations of rights, given the nature of these systems. Therefore, this event constitutes an AI Hazard due to the credible potential for future harm stemming from the AI-enabled military drones' development and deployment.

Kim Jong Un inspects unmanned attack drone tests and orders the development of artificial intelligence

2025-09-19
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (AI-enabled unmanned attack drones) in a military context. The AI system's development and use are directly linked to enhancing combat capabilities, which plausibly could lead to harm such as injury, disruption, or violations of rights due to autonomous weaponry. Although no specific harm has yet been reported, the article clearly indicates a credible risk of future harm from these AI-enabled weapons. Therefore, this qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and plausible but not yet realized.

Kim Jong Un inspects North Korea's unmanned aviation technology complex

2025-09-19
news.rthk.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions unmanned aerial weapon systems, including reconnaissance and attack drones, which reasonably imply the use of AI systems for autonomous or semi-autonomous functions. The event concerns the development and testing of these AI-enabled military drones, which could plausibly lead to harm such as injury or conflict escalation. Since no actual harm or incident is reported, but the potential for harm is credible, this qualifies as an AI Hazard rather than an AI Incident.

North Korean drone equipment tests: Kim Jong Un personally inspects on multiple occasions

2025-09-19
公共電視
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being developed to enhance the operational capabilities of military drones, which are AI systems by definition. The event involves the use and development of AI systems in a military context, which inherently carries risks of harm such as injury, disruption, or violations of human rights. However, the article does not report any actual harm or incident caused by these AI systems yet, only their testing and development. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.

Kim Jong Un inspects the DPRK's unmanned aviation technology complex

2025-09-19
香港文匯網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and testing of unmanned aerial vehicles, including strategic reconnaissance and tactical attack drones, which are typically AI-enabled systems for autonomous navigation, targeting, and operation. The inspection and approval of plans to expand and strengthen this unmanned aviation technology complex indicate ongoing development of AI-enabled military systems. Given the nature of these systems as weaponized drones, their development and deployment pose a plausible risk of harm, including injury, disruption, or violations of rights, even if no specific harm is reported yet. Therefore, this event constitutes an AI Hazard due to the credible potential for future harm from the development and enhancement of AI-enabled unmanned weapon systems.

Kim Jong Un tests AI-powered attack drones

2025-09-19
وكاله عمون الاخباريه
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in offensive and suicide drones, which are autonomous weapon systems. The development and testing of such AI-enabled military drones pose a credible risk of harm, including injury or death, disruption, and violations of human rights. Although no specific harm is reported as having occurred yet, the event plausibly leads to significant harm due to the nature of AI-powered autonomous weapons. Therefore, this qualifies as an AI Hazard under the framework.

Kim Jong Un tests AI-powered attack drones

2025-09-19
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems in offensive military drones, which are AI systems by definition due to their autonomous or semi-autonomous operational capabilities. The article does not report any actual harm occurring yet but indicates ongoing testing and plans to enhance these AI-enabled weapons. Given the offensive nature and potential for harm to people, infrastructure, and communities, this situation constitutes an AI Hazard as it plausibly could lead to AI Incidents in the future.

Kim Jong Un tests AI-powered attack drones

2025-09-20
موقع أخبارك للأخبار المصرية
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems in military drones with offensive capabilities, including autonomous or semi-autonomous functions. The deployment and enhancement of such AI-enabled weapon systems plausibly lead to significant harms such as injury, loss of life, or broader conflict escalation. Although no specific harm is reported as having occurred yet, the nature of these AI systems and their intended use in offensive military operations constitute a credible risk of future harm. Therefore, this event qualifies as an AI Hazard under the framework, as it plausibly could lead to an AI Incident involving injury or harm to people or communities.

North Korean leader oversees drone test

2025-09-19
Alrai-media
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI systems integrated into military drones, which are autonomous or semi-autonomous weapons. The use of AI in such drones poses a plausible risk of harm (injury, disruption, or violations of human rights) if these systems are deployed or used in conflict. Since the article does not report any actual harm occurring yet but highlights plans to enhance AI capabilities in weaponized drones, this qualifies as an AI Hazard rather than an AI Incident.

North Korean leader oversees drone test

2025-09-19
LBCI Lebanon
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into suicide drones and tactical and strategic reconnaissance drones. These AI systems are being developed and tested for military use, which inherently carries risks of injury, harm, and disruption. Although no specific harm is reported as having occurred yet, the development and testing of AI-enabled autonomous weapon systems constitute a credible and plausible risk of future harm, qualifying this event as an AI Hazard under the framework. There is no indication that harm has already occurred, so it is not an AI Incident. The focus is on the development and enhancement of AI capabilities in drones, not on a response or update to a past incident, so it is not Complementary Information. Therefore, the classification is AI Hazard.

Kim Jong Un tests AI-powered attack drones

2025-09-19
albiladpress.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in offensive and suicide drones, which are weaponized systems capable of causing injury, harm to people, and disruption. The development and testing of such AI-enabled military drones constitute a credible risk of harm, qualifying as an AI Hazard. Since the article does not report any actual harm or incident caused by these drones yet, but focuses on their testing and enhancement, it fits the definition of an AI Hazard rather than an AI Incident.

Kim Jong Un follows tests of AI-powered drones

2025-09-19
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in military drones, including autonomous or semi-autonomous capabilities. The development and testing of AI-enabled armed drones pose a credible risk of future harm, such as injury, disruption, or violations of rights, due to their potential use as autonomous weapons. Although no harm is reported as having occurred yet, the event plausibly leads to an AI Hazard because the AI system's development and intended use could lead to significant harm in the future.

Putin teaches Kim Jong Un modern warfare, and he smiles with satisfaction (PHOTOS)

2025-09-19
Actualno.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled attack drones being tested and used by North Korea, with targets successfully destroyed, indicating direct use of AI systems in military operations. The harm includes military casualties and destruction, which fall under harm to persons and communities. The AI system's use in autonomous drones is central to the event, and the harm is treated as realized rather than merely potential. Hence, this qualifies as an AI Incident.

Kim Jong Un observes drone tests

2025-09-19
Труд
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and testing of drones integrated with AI technologies for military purposes, including strike and reconnaissance roles. Although no specific harm is reported as having occurred yet, the nature of these AI-enabled drones as autonomous or semi-autonomous weapons systems plausibly leads to significant harm, such as injury or disruption. Therefore, this event qualifies as an AI Hazard due to the credible risk posed by the development and potential deployment of AI-powered military drones.

The DPRK strengthens its strike drones with artificial intelligence technologies

2025-09-19
Информационна Агенция "Фокус"
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems integrated into military strike drones, which are weapons capable of autonomous or semi-autonomous operation. The deployment and enhancement of AI in such weapon systems pose a credible risk of harm, including injury or harm to persons and disruption of security. Although no specific harm is reported as having occurred yet, the development and enhancement of AI-powered strike drones constitute a plausible future risk of significant harm. Therefore, this event qualifies as an AI Hazard under the framework, as it describes the development and intended use of AI-enabled autonomous weapons with high potential for misuse and harm.

Kim Jong Un observes a drone demonstration and calls for improvements with artificial intelligence (+PHOTOS)

2025-09-19
Bgonair
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in military drones intended for attack and reconnaissance purposes. These AI-enabled drones are weapons systems that could plausibly lead to harm such as injury or violations of human rights if used in warfare. Since the article does not report any actual harm or incident but focuses on the development and enhancement of these AI systems, it fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information, as it concerns the plausible future risk posed by AI-enabled autonomous or semi-autonomous weapons.

Why Kim Jong Un is so happy: North Korea tests combat drones. You have to see it

2025-09-19
Поглед Инфо
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI systems integrated into military drones, which are weaponized autonomous or semi-autonomous systems. The article explicitly mentions AI systems for unmanned aerial vehicles as a priority. While no direct harm has been reported yet, the nature of these AI-enabled weapons and their potential deployment pose a plausible risk of causing injury, harm to communities, or disruption of security, fitting the definition of an AI Hazard. There is no indication of realized harm at this stage, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it highlights a credible future risk from AI-enabled military technology.

Kim Jong Un oversaw special military tests. The dictator has one goal

2025-09-19
Onet Wiadomości
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (AI-enabled tactical attack drones) in a military context. The article highlights the leader's focus on AI integration into drones as a key military asset, implying autonomous or AI-assisted operations. While no direct harm is reported, the deployment of such AI military systems in active conflict zones and their potential to cause injury or death constitutes a credible risk of harm. Therefore, this situation fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident involving injury, harm, or disruption related to armed conflict.

North Korea shows off a weapon of "perfect effectiveness"; Seoul is concerned

2025-09-19
Interia.pl - Biznes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence in North Korea's tactical attack drones, which are tested and described as highly effective weapons. The AI system's development and intended use in military drones capable of autonomous or semi-autonomous operations represent a credible risk of causing harm, including injury or death, disruption of critical infrastructure, and violations of human rights. Although no direct harm is reported yet, the nature of the AI system and its military application plausibly lead to significant harm, fitting the definition of an AI Hazard. There is no indication of an actual incident or realized harm at this stage, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI-enabled weapon system and its implications.

Lethal, cheap, AI-supported: North Korea tests new drones

2025-09-19
rmf24.pl
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (AI-supported military drones) that are directly linked to lethal military applications. The drones' deployment and testing under the North Korean regime, combined with their use in active conflict zones, directly implicate AI in causing or enabling harm to persons and communities. The article describes realized use and testing, not just potential future risk, thus qualifying as an AI Incident rather than a mere hazard. The AI system's role is pivotal in enabling these drones' autonomous or semi-autonomous operational capabilities, which are central to the harm potential.

North Korea: KCNA: Kim Jong Un oversaw a test of new tactical assault drones

2025-09-19
wnp.pl
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into military drones, which are described as having advanced autonomous capabilities. The article indicates that these drones have been tested successfully and are prioritized for mass production, implying imminent deployment. Given the military context and the potential for these AI-enabled drones to cause injury or death in conflict zones, this constitutes a plausible and significant harm directly linked to AI system use. Therefore, this situation qualifies as an AI Hazard because the harm is plausible and credible but not explicitly reported as having occurred yet in this article.

Kim Jong Un oversaw a test of new tactical assault drones

2025-09-19
Nasz Dziennik
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI in military drones by North Korea, a state actor with a history of conflict and military aggression. The drones are described as having high combat effectiveness and the AI is intended to enable operation even under GPS or communication jamming, indicating autonomous or semi-autonomous capabilities. This clearly involves an AI system. While no direct harm is reported in the article, the potential for these AI-enabled drones to cause injury, disrupt critical infrastructure, or violate human rights in future military operations is credible and significant. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the AI system's use in military assault drones.

Kim Jong Un keen to expand drone production, inspects performance tests

2025-09-19
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of drones equipped with AI technology for military purposes. Although no direct harm is reported yet, the production and deployment of AI-enabled tactical attack drones pose a credible risk of causing injury, harm to communities, or disruption through their use in conflict. Therefore, this situation constitutes an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident involving harm.

North Korea's General Secretary Kim inspects performance tests of domestically developed drones

2025-09-19
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of unmanned drones with AI technology for military purposes. The leader's directive to improve AI capabilities and production suggests ongoing development and use of AI systems with potential for harm. Although no direct harm is reported, the military nature and potential deployment of these AI-enabled drones plausibly could lead to incidents involving injury, disruption, or rights violations. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Kim Jong Un inspects performance tests of tactical unmanned attack drones and orders AI adoption

2025-09-19
聯合ニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integration into tactical unmanned attack drones, which are AI systems by definition due to their autonomous or semi-autonomous operational nature. The event concerns the development and intended use of these AI-enabled weapons, which could plausibly lead to harms such as injury, violation of human rights, and damage to property or communities. Since no actual harm is reported yet, but the credible risk is clear and significant, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the inspection and directive to integrate AI, not on responses or updates to past incidents. It is not Unrelated because the AI system involvement and potential harm are central to the report.

North Korea's Chairman Kim inspects drone performance tests, orders improvements in AI technology

2025-09-19
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being integrated and improved in North Korea's attack and reconnaissance drones, which are weaponized unmanned systems. The development and testing of such AI-enabled military drones inherently carry a credible risk of causing harm through their use in conflict or attacks. Since the article does not report any actual harm or incident resulting from these AI systems yet, but highlights the ongoing development and enhancement with clear military offensive applications, it fits the definition of an AI Hazard. The event involves AI system development and use with plausible future harm, but no direct or indirect harm has been reported as realized at this time.

Kim Jong Un inspects tests of unmanned attack drones... they crash into targets and explode, to his "great satisfaction"

2025-09-19
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology in tactical unmanned attack drones that are tested to hit and explode on targets. While no actual harm to people or property beyond the test is reported, the development and use of AI in autonomous weapons inherently carry a credible risk of causing injury, death, or destruction in future use. The event is not a report of an AI Incident (no realized harm to persons or communities yet), but the AI system's development and use plausibly could lead to significant harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

North Korea's Kim Jong Un orders expanded drone production and "more advanced" AI technology

2025-09-19
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being integrated into drones and the directive to enhance this AI capability and increase drone production. Given the military context and the reference to modern warfare and the use of drones in conflicts like the Russia-Ukraine war, the development and expansion of AI-enabled drones pose a credible risk of future harm. Although no specific harm has yet occurred or been reported, the nature of the AI system's intended use in military drones implies a plausible future risk of AI incidents such as injury, disruption, or violations of human rights. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.
Thumbnail Image

北朝鮮の金正恩氏、無人機試験を視察 AI活用を指示:時事ドットコム

2025-09-19
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into attack drones, which are military autonomous or semi-autonomous systems capable of lethal action. The article describes actual testing and deployment of these AI-enabled drones, with explicit instructions to accelerate AI development and production. This directly links AI system use to potential and actual harm in military conflict, fulfilling the criteria for an AI Incident due to harm to people and communities. The article does not merely warn of future harm but reports ongoing use and development, so it is not merely a hazard or complementary information.
Thumbnail Image

金正恩総書記がAI無人機の生産拡大を指示 性能試験を視察「現代戦で利用範囲広がる」

2025-09-19
産経ニュース
Why's our monitor labelling this an incident or hazard?
The article describes the development and production of AI-enabled military drones with attack capabilities, which are inherently hazardous due to their potential use in armed conflict and the harm they could inflict on people and communities. Although no specific incident of harm is reported, the expansion and enhancement of AI-powered attack drones could plausibly lead to AI incidents involving injury, violation of rights, or harm to communities. Therefore, this event constitutes an AI Hazard, as it could plausibly lead to significant harm through the use of AI in autonomous or semi-autonomous weapons systems.
Thumbnail Image

北朝鮮の金総書記、無人機向けAI開発が軍強化の最優先課題と強調

2025-09-19
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development for military drones, which are AI systems used for autonomous or semi-autonomous operations. Although no specific harm has yet occurred or been reported, the development and prioritization of AI-enabled military drones with combat capabilities pose a credible risk of future harm, including injury, disruption, or violations of human rights. Therefore, this event constitutes an AI Hazard due to the plausible future harm from the militarization of AI-powered unmanned systems.
Thumbnail Image

金正恩氏が無人機試験視察、AIによる強化を命令=朝鮮中央通信

2025-09-19
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into military drones, which are weaponized unmanned aerial vehicles. The involvement of AI in enhancing these drones' capabilities directly relates to their potential for causing harm. While the article does not report an actual incident of harm, the nature of AI-enabled autonomous or semi-autonomous weapon systems inherently carries a plausible risk of causing injury, disruption, or violations of rights. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm stemming from the AI-enhanced military drones.
Thumbnail Image

北朝鮮の金正恩氏、無人機試験を視察 AI活用を指示

2025-09-19
afpbb.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as AI is directed to be further utilized in attack drones, which are autonomous or semi-autonomous weapons platforms. The development and deployment of such AI-enabled attack drones inherently carry a plausible risk of causing harm to people and communities, fulfilling the criteria for an AI Hazard. Since no actual harm or incident is reported yet, it does not qualify as an AI Incident. The event is not merely general AI news or a product launch but highlights a credible military AI application with potential for significant harm, so it is not Complementary Information or Unrelated.
Thumbnail Image

AI無人機の生産能力拡大を指示 金正恩氏「現代戦で利用」

2025-09-19
神戸新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into drones intended for military use, including attack drones. The development and production expansion of such AI-enabled drones for warfare is a clear AI system involvement. While no direct harm is reported yet, the intended use in modern warfare and the nature of autonomous or AI-assisted weapon systems plausibly pose a significant risk of harm, qualifying this as an AI Hazard. The event does not describe an actual incident causing harm but highlights a credible future risk from the AI system's development and intended use in conflict.
Thumbnail Image

AI無人機の生産能力拡大を指示 金正恩氏「現代戦で利用」

2025-09-19
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (AI-enabled unmanned drones) for military purposes, specifically modern warfare. Although no direct harm has yet occurred, the expansion and enhancement of AI-powered military drones plausibly pose significant risks of harm, including injury, disruption, and violations of rights, if deployed in conflict. Therefore, this constitutes an AI Hazard due to the credible potential for future harm stemming from the AI system's development and intended use in warfare.
Thumbnail Image

AI無人機の生産能力拡大を指示 金正恩氏「現代戦で利用」

2025-09-19
琉球新報
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-enabled unmanned drones) under development and production for military use, which could plausibly lead to harms such as injury or disruption in conflict scenarios. Since no actual harm or incident has occurred or been reported, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses, legal proceedings, or updates to past incidents, so it is not Complementary Information. It is directly related to AI systems and their potential impact, so it is not Unrelated.
Thumbnail Image

AI無人機の生産能力拡大を指示|埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題

2025-09-19
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The event involves the development and production of AI-equipped unmanned drones, which are AI systems. The instruction to enhance AI capabilities and increase production suggests a credible risk of future harm, especially given the military context and the ongoing conflict in Ukraine where such drones have been used. Although no harm has yet occurred as per the article, the plausible future use of these AI systems in conflict zones constitutes an AI Hazard.
Thumbnail Image

ロシア自爆ドローンにそっくり!?「北朝鮮オリジナル無人機たち」金総書記が"生産能力の拡充"を命令 | 乗りものニュース

2025-09-20
乗りものニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in the development of unmanned combat drones and the directive to rapidly expand this capability. These drones are described as attack and reconnaissance UAVs, including suicide drones similar to those used by Russia, implying autonomous or semi-autonomous AI systems. Although no actual incident or harm is reported, the production and enhancement of AI-enabled military drones inherently carry a plausible risk of future harm such as injury, disruption, or violations of human rights. Hence, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information.
Thumbnail Image

بالذكاء الاصطناعى.. زعيم كوريا الشمالية يشرف على اختبار مسيرات هجومية - اليوم السابع

2025-09-19
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (AI-enhanced offensive drones) in a military context. Although no specific incident of harm is reported as having occurred yet, the deployment and enhancement of AI-powered offensive drones clearly could plausibly lead to injury, loss of life, or broader harm to communities through military conflict. The article describes ongoing development and testing, indicating a credible risk of future harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident, since harm is plausible but not yet realized or reported.
Thumbnail Image

زعيم كوريا الشمالية يشرف على اختبار طائرة هجومية مسيرة

2025-09-19
قناة العربية
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI-enabled armed drones, which are AI systems by definition due to their autonomous or semi-autonomous operational capabilities. The article does not report any actual harm caused by these drones yet, but the use of AI in offensive military drones inherently carries a plausible risk of injury, death, and broader security harms. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The mention of accelerating AI research and expanding drone production further supports the potential for future harm.
Thumbnail Image

كوريا الشمالية تختبر مسيّرة هجومية... وكيم مهتم بالذكاء الاصطناعي

2025-09-19
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-enabled offensive drones) and their development and testing under the direction of North Korea's leadership. The drones' combat use and AI integration imply a credible risk of future harm (injury, death, military conflict escalation). Since no actual harm or incident is reported, but the potential for harm is clear and credible, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with military applications and associated risks.
Thumbnail Image

كيم جونغ أون يأمر بتطوير مسيّرات هجومية بالذكاء الاصطناعي

2025-09-19
24.ae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in offensive drones, which are weaponized autonomous or semi-autonomous systems. The development and testing of such AI-enabled military drones pose a plausible risk of causing injury, death, or broader harm through their use in conflict. Since no actual harm or incident is reported yet, but the event clearly indicates a credible future risk, this qualifies as an AI Hazard under the framework.
Thumbnail Image

زعيم كوريا الشمالية يشرف على اختبار مسيّرات

2025-09-20
البيان
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in the development of armed drones, which are military autonomous systems capable of lethal action. While the test itself did not report any harm, the deployment and further development of AI-powered armed drones inherently carry significant risks of harm, including injury or death in conflict zones and broader geopolitical instability. This fits the definition of an AI Hazard, as the event plausibly could lead to AI Incidents in the future. There is no indication that harm has already occurred from this specific test, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it highlights a credible future risk from AI-enabled military technology.
Thumbnail Image

كوريا الشمالية: كيم يشرف على تجارب مسيّرات ويدعو لتعزيزها بالذكاء الاصطناعي

2025-09-19
شبكة الميادين
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technologies to enhance unmanned military drones, which are AI systems by definition due to their autonomous or semi-autonomous capabilities. The event involves the development and use of AI systems in weapons, which have a high potential for causing harm. Since no actual harm or incident is reported, but the development and testing of AI-enabled military drones with combat capabilities is ongoing, this constitutes an AI Hazard. The plausible future harm includes injury, disruption, or other significant harms from the use of AI-powered weapons. Therefore, the event is best classified as an AI Hazard.
Thumbnail Image

كوريا الشمالية | كيم جونغ أون يشرف على اختبار طائرة هجومية مسيّرة - قناة المنار

2025-09-19
موقع قناة المنار - لبنان
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems in military drones, which are weaponized autonomous or semi-autonomous systems. The use of AI in armed drones directly relates to potential harm through military conflict or misuse, constituting a plausible risk of significant harm. Although no specific harm has yet occurred or been reported, the development and prioritization of AI-enabled armed drones represent a credible and serious AI Hazard due to their potential to cause injury, disruption, or violations of rights in future military actions.
Thumbnail Image

الوكالة الوطنية للإعلام - زعيم كوريا الشمالية أشرف على اختبار طائرة هجومية مسيّرة

2025-09-19
National News Agency - Lebanon (NNA) / Al Wikaala al Wataniyya lil Anbaa'
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in armed drones, which are autonomous or semi-autonomous systems capable of lethal action. The development and deployment of AI-enabled offensive drones pose a credible risk of harm, including injury or death, disruption of security, and violations of international law. Although no specific harm has yet occurred from this particular test, the event indicates ongoing development and prioritization of AI in military drones, which could plausibly lead to significant harm in the future. Therefore, this qualifies as an AI Hazard due to the plausible future harm from AI-enabled autonomous weapons development and deployment.
Thumbnail Image

زعيم كوريا الشمالية يشرف على اختبار مسيرات هجومية جديدة

2025-09-19
مانكيش نت
Why's our monitor labelling this an incident or hazard?
The drones are AI-enabled systems used for offensive military purposes, which inherently carry significant risks of harm including injury, disruption, and violations of international law. The article does not report any actual harm occurring yet but highlights ongoing development and enhancement of AI-powered offensive drones, which plausibly could lead to serious harms in the future. Therefore, this event qualifies as an AI Hazard due to the credible risk posed by the development and deployment of AI-enabled offensive drones by a sanctioned state with military ambitions.
Thumbnail Image

كيم جونغ أون يشرف على اختبار طائرات هجومية مسيّرة

2025-09-19
algeriapressonline.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to enhance the capabilities of attack drones, which are AI systems capable of autonomous or semi-autonomous operation. The development and deployment of AI-enabled military drones with offensive capabilities pose a credible risk of harm, including injury, disruption, or violations of rights, due to their potential use in conflict. Although no specific harm is reported as having occurred yet, the event plausibly leads to future AI incidents involving harm. Therefore, this qualifies as an AI Hazard under the framework, as it involves the development and use of AI systems that could plausibly lead to significant harm.
Thumbnail Image

زعيم كوريا الشمالية يشرف على اختبار مسيّرات هجومية

2025-09-19
الجزيرة نت
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to enhance offensive drones, which are weaponized systems capable of autonomous or semi-autonomous operations. The deployment and further development of such AI-enabled military drones pose a credible risk of causing harm through their use in conflict, including injury or death and broader geopolitical instability. Since the harm is plausible but not described as having already occurred in this specific event, it fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news but concerns the development and testing of AI-enabled offensive military systems with clear potential for harm.
Thumbnail Image

زعيم كوريا الشمالية يتولى قيادة اختبار الطائرات المسيّرة الهجومية - خبرنا

2025-09-19
خبرنا
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into offensive drones, which are explicitly described as being tested and improved with AI capabilities. The drones' military application and potential for lethal use imply a plausible risk of harm (injury, death, or broader conflict-related harm). Since the article does not report any actual harm or incident caused by these AI systems yet, but highlights their strategic deployment and enhancement, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's role is pivotal in enhancing the drones' capabilities, and the context of international conflict and military alliances underscores the credible risk of future harm.
Thumbnail Image

كيم جونغ أون يكشف عن سلاح سري جديد

2025-09-22
سبوتنيك عربي
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems in military weapons, including drones and strategic aircraft, which are explicitly linked to AI development efforts. Although no direct harm is reported yet, the potential for these AI-enabled weapons to cause injury, disruption, or other harms is credible and significant. Therefore, this constitutes an AI Hazard rather than an AI Incident, as the harm is plausible but not yet realized.
Thumbnail Image

Korea Utara Genjot Pengembangan Drone dan Teknologi AI

2025-09-19
IDN Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI as a priority in advancing drone and unmanned vehicle technology for military use, with successful tests demonstrating combat capabilities. The AI system's development and use in weaponized drones directly relate to potential harm, including injury or death and broader security risks. Even if no harm has yet occurred, the nature of these AI-enabled weapons and their deployment constitutes a plausible risk of significant harm, qualifying this event as an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The focus is on development and testing, not on responses or updates, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.
Thumbnail Image

Kim Jong Un: Pengembangan Drone Berbasis AI Jadi Prioritas Modernisasi Militer Korut

2025-09-20
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and testing of AI-based drones and unmanned vehicles for military purposes in North Korea. Although no direct harm or incident is reported, the development and deployment of AI-enabled military drones inherently carry credible risks of future harm, including injury, violations of human rights, and broader security threats. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future due to the nature of the AI system's intended use and potential misuse in military conflict.
Thumbnail Image

Kim Jong Un Awasi Uji Coba Drone, Minta Pengembangan AI Dikebut

2025-09-19
detik News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration in military drones capable of autonomous attack and reconnaissance functions. Although no direct harm or incident is reported, the development and testing of AI-enabled attack drones inherently carry a credible risk of causing harm in the future, such as injury or death, disruption of security, or escalation of conflict. The event involves the use and development of AI systems with potential for significant harm, fitting the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The focus is on the potential threat and development efforts, not on responses or updates, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.
Thumbnail Image

Kim Jong Un Makin 'Ngeri', Perintah Militer Kembangkan Drone AI

2025-09-19
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI systems in autonomous attack drones by North Korea, which are tested and shown to destroy targets. This involves AI system use in a military context with lethal potential. Although no specific harm or casualties are reported, the deployment of AI-enabled autonomous weapons capable of attack clearly poses a credible and significant risk of harm to people and communities, fulfilling the criteria for an AI Hazard. The article does not describe an actual incident of harm caused by the AI system but highlights the plausible future harm and strategic military use, which is consistent with the definition of an AI Hazard. Therefore, the event is best classified as an AI Hazard.
Thumbnail Image

Video: Kim Jong Un Pamer Korut Uji Coba Drone Berteknologi AI

2025-09-19
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to enhance unmanned aerial vehicles with military applications. While no direct harm is reported yet, the development and testing of AI-powered tactical and strategic drones by a regime known for military aggression plausibly pose future risks of harm, such as injury, disruption, or violations of rights. The event involves the use and development of AI systems with potential for misuse or escalation of conflict, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
Thumbnail Image

Kim Jong-un Awasi Uji Coba Drone Taktis, Tekankan Pemanfaatan AI

2025-09-19
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI in tactical and strategic drones, which are military AI systems. While no harm has been reported, the development and testing of AI-powered weaponized drones plausibly could lead to harm such as injury, disruption, or violations of rights in the future. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident, as it concerns plausible future harm from AI-enabled military drones but no realized harm is described.
Thumbnail Image

Kim Jong Un Awasi Ketat Pengembangan AI dan Uji Coba Drone

2025-09-19
investor.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being integrated into attack drones that have been tested and used in military operations, with reported casualties among soldiers. The AI system's use in autonomous or semi-autonomous drones directly leads to harm (injury and death), fulfilling the criteria for an AI Incident. The involvement is in the use of AI systems in military drones causing direct harm. Therefore, this is classified as an AI Incident.
Thumbnail Image

Sambil Pamer Uji Coba Drone, Kim Jong Un Perintahkan AI di Militer Korut

2025-09-19
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (autonomous military drones) with explicit mention of AI enabling autonomous mission execution. The article does not report a realized harm but emphasizes the potential threat and military strategic importance of these AI-enabled drones. Given the nature of autonomous weapon systems and their potential to cause injury, disruption, and violations of human rights or international law, this situation constitutes an AI Hazard. There is no indication of a current AI Incident or complementary information; rather, it is a credible future risk scenario.
Thumbnail Image

Kim Jong-un Awasi Langsung Uji Coba Drone Serang Taktis Korea Utara

2025-09-19
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into tactical attack drones being tested for combat purposes. The AI system's development and use in autonomous or semi-autonomous weaponry inherently carry risks of injury, harm to people, and disruption of security. While no direct harm is reported from this specific test, the event plausibly leads to future AI incidents involving harm due to the offensive military application. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI involvement is clear and central to the event.
Thumbnail Image

Pamer Drone Tempur, Kim Jong Un Ungkap Akan Kembangkan AI, Tak Mau Kalah dari Negara Lain

2025-09-19
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and planned use of AI in military drones, which are weapon systems. The use of AI in autonomous or semi-autonomous attack drones is widely recognized as a significant hazard due to the potential for lethal harm and escalation of conflict. Although no incident of harm is reported, the event clearly indicates a credible risk that the AI systems could lead to injury, death, or other serious harms in the future. Therefore, this qualifies as an AI Hazard under the framework, as it plausibly could lead to an AI Incident involving harm to persons or communities.
Thumbnail Image

Kim Jong-un Awasi Uji Coba Drone Kumsong di Pangkalan Panghyon

2025-09-21
seputarmiliter.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the integration of AI into military drones and their testing for combat and reconnaissance purposes. These AI-enabled drones, including suicide and swarm types, have a high potential for causing harm if deployed in conflict, thus representing a credible risk of future harm. Since no actual harm or incident is reported yet, but the plausible risk is clear and significant, this event qualifies as an AI Hazard under the OECD framework.
Thumbnail Image

김정은, 전술무인공격기 성능시험 지도..."AI기술 새로 도입" | 연합뉴스

2025-09-18
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in unmanned armed drones, which are military AI systems. While no direct harm or incident is reported, the deployment of AI-enabled tactical attack drones inherently carries a plausible risk of causing harm in the future, such as injury, disruption, or violations of rights due to their military use. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but no realized harm is described yet.
Thumbnail Image

김정은, 자폭 무인공격기 시험 지도..."AI기술 급속 발전시켜야"(종합) | 연합뉴스

2025-09-18
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI integration in unmanned attack drones used for military purposes, which are AI systems by definition. The event involves the development and testing of these AI systems, with the potential to cause harm through military use. Although no actual harm or incident is reported, the plausible future harm from autonomous AI-enabled weapons is significant. Therefore, this is classified as an AI Hazard rather than an AI Incident. The article does not focus on responses, remediation, or broader governance, so it is not Complementary Information. It is clearly related to AI systems and their potential for harm, so it is not Unrelated.
Thumbnail Image

김정은, 자폭 무인공격기 시험 지도..."AI기술 급속 발전시켜야"(종합2보) | 연합뉴스

2025-09-19
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the development and testing of AI-enabled unmanned attack drones capable of autonomous or semi-autonomous military operations. The drones are suicide attack types, implying direct harm potential to people and property. The involvement of AI technology in these weapons and their demonstrated operational testing under leadership supervision indicates realized use of AI systems with direct links to harm. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to potential or actual harm through military aggression and violence. The article does not merely warn of future risks but reports active deployment and testing, confirming incident status rather than hazard or complementary information.
Thumbnail Image

[영상] 북한판 '글로벌호크' '하롭·히어로'...북, 무인기 성능시험 | 연합뉴스

2025-09-19
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses AI technology integration into unmanned combat and reconnaissance drones. The event is about development and testing (use) of these AI-enabled systems. No actual harm or incident is reported, but the military application of AI drones inherently carries credible risks of injury, disruption, or other harms. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI involvement and plausible future harm are evident.
Thumbnail Image

김정은, 美 글로벌호크 빼닮은 무인기 성능 시험 참관... "AI 기술 새로 도입

2025-09-19
Chosun.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI technology being newly introduced and rapidly developed for unmanned armed drones, which are military systems capable of autonomous or semi-autonomous operation. The article does not report any actual harm or incident but highlights the strategic military value and enhancement of combat effectiveness through AI. Given the nature of AI-enabled weapon systems, their development and testing represent a credible risk of future harm, such as injury or violations of human rights, thus fitting the definition of an AI Hazard. There is no indication of realized harm or incident yet, so it is not an AI Incident. It is more than general AI news or complementary information because the focus is on the development and testing of AI-enabled military drones with potential for harm.
Thumbnail Image

北, 모자이크 없이 무인공격기 공개... AI기술 · 작전 능력 등 고도화 과시

2025-09-19
문화일보
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems integrated into unmanned attack drones used for military purposes. The article describes the development and operational testing of these AI-enabled drones, which are capable of autonomous or semi-autonomous attack missions. Although no specific harm is reported as having occurred yet, the deployment and demonstration of such AI-powered lethal drones present a credible risk of future harm, including injury, death, and disruption of security. Therefore, this qualifies as an AI Hazard under the framework, as the AI system's use could plausibly lead to an AI Incident involving significant harm.
Thumbnail Image

우크라서 현대전 익힌 北, 모자이크 없이 자폭 무인기 2종 공개

2025-09-19
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the integration of AI technology in North Korea's kamikaze drones, which are designed to autonomously identify and attack targets, causing physical destruction. While the article does not describe any actual use or harm caused by these drones, the development and testing of such AI-enabled autonomous weapons present a credible risk of future harm to people and property. The AI system's involvement is in the development and use phases, with plausible future harm from their deployment in military conflict. Since no actual harm has been reported yet, this fits the definition of an AI Hazard rather than an AI Incident.
Thumbnail Image

北 김정은, '금성' 자폭 드론 시험 참관..."소규모 도발 신호

2025-09-19
국민일보
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into unmanned armed drones capable of precision attacks and suicide missions, which directly relate to harm to persons and military infrastructure. The article describes actual tests and deployment of these AI-enabled drones, indicating realized or imminent harm potential. The AI system's role in enabling autonomous or semi-autonomous lethal operations meets the criteria for an AI Incident, as the harm is direct or imminent. The military context and explicit mention of AI technology in these drones confirm the classification as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

김정은, 자폭 무인기 시험지도..."무력 현대화 우선 과제" | 한국일보

2025-09-19
한국일보
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into military drones designed for suicide attacks, which have been tested and demonstrated to strike targets. This clearly involves AI system use in a context that can cause injury, harm, or destruction, fulfilling the criteria for an AI Incident. The article describes realized harm potential through the deployment and testing of these armed AI-enabled drones, which are a direct threat to human safety and security. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

북한, 자폭 무인공격기 '금성' 첫 공개... "김정은 동지, 커다란 만족 표시

2025-09-19
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI-enabled autonomous armed drones capable of lethal attacks, which have been tested and demonstrated to strike targets. This directly relates to harm to persons and communities through military violence. The AI system's role in enabling autonomous targeting and attack is pivotal. Therefore, this is an AI Incident due to the realized use of AI in lethal autonomous weapons causing or enabling harm.
Thumbnail Image

김정은, 무인기 성능 시험 참관...'금성' 계열 첫 언급 - 정치 | 기사 - 더팩트

2025-09-19
더팩트
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI-enabled unmanned armed drones with enhanced operational capabilities, which are intended for military use. The article explicitly mentions the integration of AI technology in these systems and the prioritization of their advancement. Although no direct harm is reported yet, the development and potential deployment of such AI-powered autonomous weapons pose a credible risk of harm to people and communities, fitting the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The focus is on the plausible future harm from these AI-enabled military systems.
Thumbnail Image

김정은, 전술무인공격기 성능시험 지도..."인공지능 기술 새로 도입"

2025-09-18
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into tactical unmanned attack drones, as explicitly stated. The AI's role in enhancing combat capabilities of these drones indicates a direct link to potential military harm. Since the article describes ongoing performance tests and the strategic importance of AI in these weapons, it fits the definition of an AI Hazard—an event where AI system use could plausibly lead to harm. There is no indication that harm has already occurred, so it is not an AI Incident. The focus is on the development and testing phase with potential future risks, not on societal responses or complementary information. Therefore, the classification is AI Hazard.
Thumbnail Image

김정은, 전술무인공격기 성능시험 지도..."AI기술 새로 도입" | 아주경제

2025-09-19
아주경제
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI-enabled tactical unmanned attack drones, which are AI systems used in military applications. The article does not report any actual harm or incident resulting from these systems yet, but the introduction and enhancement of AI in armed drones is a credible and significant risk that could plausibly lead to harm such as injury, disruption, or violations of human rights. Hence, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the development and testing of AI military systems with potential for harm, not on responses or updates to past incidents. It is not unrelated because AI involvement is explicit and central.
Thumbnail Image

김정은, '금성' 자폭무인기 첫 공개...'AI 접목' 강조

2025-09-19
아시아투데이
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into autonomous or semi-autonomous armed drones ('Geumseong' suicide drones). The article explicitly mentions AI technology adoption to enhance combat capabilities, indicating AI system involvement. The drones have been tested and demonstrated striking targets, which directly implies harm to property and potential harm to persons and communities. The military use of AI-enabled lethal autonomous weapons is a recognized source of significant harm and legal concern. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm or the credible realization of harm through military attacks.
Thumbnail Image

김정은, 美 표적 두고 자폭 무인기 시험...AI 탑재 가능성도 | 중앙일보

2025-09-19
중앙일보
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the probable integration of AI technology in North Korea's suicide drones, which are tested to autonomously identify and attack targets resembling U.S. military assets. The use of AI in lethal autonomous weapons systems directly relates to harm to persons and communities (harm categories a and d). The event describes actual testing and deployment, not just potential or hypothetical risks, indicating realized harm or imminent risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The use of the AI system in military attacks, with its potential to cause injury or death, constitutes harm directly linked to that use.
Thumbnail Image

김정은, 6개월만에 또 자폭무인기 시험 참관..."AI 기술 도입"

2025-09-19
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as integrated into unmanned attack drones (suicide drones) with autonomous operational capabilities. The development and testing of these AI-enabled military drones could plausibly lead to harm including injury or death, disruption of security, and escalation of conflict. Since no actual harm or incident is reported, but the potential for significant harm is credible and highlighted, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the development and testing phase and the strategic emphasis on AI, fitting the definition of an AI Hazard.
Thumbnail Image

كيم يشرف على اختبار أداء مسيرات هجومية تكتيكية

2025-09-19
تورس
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into military drones, which are offensive weapons. The deployment and testing of AI-powered autonomous or semi-autonomous weapon systems directly relate to potential harm, including injury or harm to persons and disruption of security. Although no specific harm is reported as having occurred yet, the nature of these AI-enabled offensive drones and their military application plausibly pose significant risks of harm. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm stemming from the development and operationalization of AI-powered tactical offensive drones.
Thumbnail Image

الزعيم الكوري الشمالي يشرف على اختبار أداء مسيرات هجومية | شبكة الإعلام العراقي

2025-09-19
شبكة الاعلام العراقي
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI-enabled unmanned aerial vehicles (drones) with offensive military capabilities. The article indicates active testing and strategic military use, which implies a direct link to potential harm through military conflict or escalation. The development and deployment of AI-powered offensive drones constitute a plausible AI Hazard due to their potential to cause harm in warfare. However, the article does not report any actual harm or incident resulting from these AI systems yet, only tests and development. Therefore, this is best classified as an AI Hazard rather than an AI Incident.
Thumbnail Image

ما قاله زعيم كوريا الشمالية خلال اختبار أداء مسيرات هجومية تكتيكية

2025-09-19
الوفد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in unmanned aerial vehicles (drones) designed for offensive military operations, including suicide attack drones. These are AI systems as they involve autonomous or semi-autonomous decision-making capabilities in a military context. The event involves the use and testing of these AI systems, which are weapons with high potential for misuse and harm. While no direct harm is reported yet, the nature of these AI-enabled weapons and their operational testing constitute a credible and plausible risk of causing injury, harm to persons, or disruption of security, qualifying this as an AI Hazard under the framework. There is no indication of an actual incident of harm yet, so it is not an AI Incident. The event is not merely complementary information or unrelated news, as it concerns the development and testing of AI systems with clear potential for harm.
Thumbnail Image

من أصول الحرب الحديثة.. زعيم كوريا الشمالية يُشرف على اختبار أداء "المسيرات الهجومية التكتيكية" (تفاصيل) | المصري اليوم

2025-09-19
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the application and prioritization of AI technology in unmanned military drones and weapons systems, which are AI systems by definition. Although no actual harm or incident is reported, the development and enhancement of AI-powered offensive drones in a hostile geopolitical context plausibly could lead to harm such as injury, disruption, or violations of human rights. The event is about the development and use of AI systems with high potential for misuse and harm, fitting the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. It is not merely complementary information or unrelated news, as the focus is on the AI-enabled military technology development and its implications for future harm.
Thumbnail Image

الزعيم الكوري الشمالي يُشرِف على اختبار أداء مُسَيّرات هجومية تكتيكية

2025-09-19
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies enhancing the operational capabilities of unmanned offensive drones. These drones qualify as AI systems due to their autonomous or semi-autonomous nature and tactical offensive use. Although no incident of harm is reported, the development and testing of such AI-enabled weapons constitute a credible potential threat that could plausibly lead to AI incidents involving injury, disruption, or violations of human rights. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident, as harm is plausible but not yet realized.
Thumbnail Image

زعيم كوريا الشمالية يختبر مسيرات هجومية ويطالب بتطويرها بالذكاء الاصطناعي - الوطن

2025-09-19
الوطن
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI-enabled military drones, which are AI systems by definition due to their autonomous or semi-autonomous operational capabilities. The article does not describe any realized harm or incident caused by these AI systems but highlights the prioritization of AI integration in offensive drones, which plausibly could lead to harm such as injury, disruption, or violations of rights in conflict scenarios. Therefore, this qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.
Thumbnail Image

"ليست نووية".. زعيم كوريا الشمالية يشرف على مفاجأة عسكرية

2025-09-19
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in unmanned offensive drones, including suicide attack drones, which are autonomous weapons systems. The development and testing of such AI-enabled military drones inherently carry a credible risk of harm to people and security if deployed in conflict. Although no actual harm is reported in this article, the event plausibly could lead to AI Incidents involving injury or death and disruption of security infrastructure. Hence, it fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to significant harm. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated.
Thumbnail Image

كيم يشرف على اختبار أداء مسيرات هجومية تكتيكية

2025-09-19
Babnet Tunisie
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI-enabled offensive drones, which are AI systems by definition due to their autonomous or semi-autonomous operational capabilities. Although no harm is reported as having occurred yet, the nature of these AI systems as offensive weapons implies a plausible risk of future harm, such as injury or disruption. With no indication of realized harm or of a societal or governance response, the event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

زعيم كوريا الشمالية يشرف على إختبار مُسيّرات هجومية تكتيكية - قناة العالم الاخبارية

2025-09-19
قناة العالم الاخبارية
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems integrated into tactical offensive drones, which are military AI systems. Although the article does not describe any realized harm, the nature of these AI-enabled weapons and their strategic military use plausibly could lead to significant harms such as injury, disruption, or violations of human rights. Therefore, this qualifies as an AI Hazard due to the credible risk posed by the development and enhancement of AI-powered offensive unmanned systems.
Thumbnail Image

الزعيم الكوري الشمالي يؤكد على ضرورة تعزيز القدرات التشغيلية للمسيرات الهجومية التكتيكية

2025-09-19
Qatar News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems integrated into tactical attack drones and unmanned vehicles used for military purposes, which are designed to cause harm in conflict. The development, testing, and deployment of AI-enabled autonomous or semi-autonomous weapons directly relate to potential or actual injury, disruption, and violations of human rights and international law. The article mentions past tests of AI-powered suicide attack drones and their use in active conflict (supporting Russia in Ukraine), indicating realized harm or at least direct involvement in conflict scenarios. Given the direct link between AI system use and military harm, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

"كيم جونغ-أون" يشرف على اختبار أداء مسيرات هجومية تكتيكية | وكالة يونهاب للانباء

2025-09-19
وكالة يونهاب للأنباء
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI-enabled unmanned aerial vehicles (drones) for offensive military purposes by North Korea. The article explicitly mentions AI technology integration and the operational testing of these drones, which are intended for combat and reconnaissance roles. Given the military application and the potential for these AI systems to cause injury, death, or broader harm in conflict zones, this qualifies as an AI Incident under the framework, as harm is directly linked to the AI system's use in weaponry. The article describes actual tests and deployment, not just potential future risks, indicating that the harm potential is already being realized rather than remaining a mere hazard.
Thumbnail Image

(جديد) "كيم جونغ-أون" يشرف على اختبار أداء مسيرات هجومية تكتيكية | وكالة يونهاب للانباء

2025-09-19
وكالة يونهاب للأنباء
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in offensive tactical drones and suicide attack drones, which are military AI systems with high potential for harm. While the article describes tests and development rather than actual deployment causing harm, the nature of these AI systems and their intended use in combat plausibly could lead to injury, death, or disruption. Hence, it fits the definition of an AI Hazard. There is no indication of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information because it focuses on the development and testing of AI-enabled weapon systems with clear potential for harm, nor is it unrelated as it clearly involves AI systems.
Thumbnail Image

Β.Κορέα: Ο Κιμ Γιονγκ Ουν επέβλεψε άνευ προηγουμένου δοκιμές drones εφόρμησης - iefimerida.gr

2025-09-19
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology in military drones capable of attack missions. While no actual incident of harm is reported, the nature of the AI system (attack drones) and the context (military use, combat effectiveness) imply a credible risk of future harm, including injury or death and disruption of security. The event is therefore best classified as an AI Hazard, as it plausibly could lead to an AI Incident involving harm to people or communities. There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information or unrelated, as the AI system's development and use for attack drones is central and poses a credible threat.
Thumbnail Image

Βόρεια Κορέα: "Κορυφαία προτεραιότητα" η ανάπτυξη στρατιωτικών drones με τη χρήση AI | Η ΚΑΘΗΜΕΡΙΝΗ

2025-09-19
H Kαθημερινή
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in the development of military drones by North Korea, which is a clear example of an AI system with high potential for misuse and harm. The focus is on the development and testing of these AI-enabled weapon systems, which could plausibly lead to incidents involving injury, violations of human rights, or disruption of security. Since no actual harm has been reported yet, but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident.
Thumbnail Image

Βόρεια Κορέα: Δοκιμές drones εφόρμησης υπό την επίβλεψη Κιμ

2025-09-19
Η Ναυτεμπορική
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology for military drones capable of attack missions, which are AI systems by definition. While no direct harm is reported from the tests themselves, the deployment of AI-enabled attack drones inherently carries a credible risk of causing injury, harm to communities, and escalation of conflict. The event is about the development and testing phase, indicating plausible future harm rather than realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Ο Κιμ Γιονγκ Ουν παρακολουθεί δοκιμές drones εφόρμησης και καλεί να αναπτυχθούν με αξιοποίηση της ΤΝ

2025-09-19
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technology for military drones capable of attack, which are AI systems by definition. The event involves the use and development of these AI systems with the potential to cause harm in armed conflict. While no actual harm is reported yet, the deployment of AI-powered attack drones clearly poses a plausible risk of injury, harm to communities, and violations of human rights. This fits the definition of an AI Hazard, as the event could plausibly lead to an AI Incident. There is no indication that harm has already occurred, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it concerns the development and intended use of AI systems with significant potential for harm.
Thumbnail Image

Βόρεια Κορέα: Ο Κιμ παρακολουθεί δοκιμές drones εφόρμησης

2025-09-19
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI systems integrated into military drones designed for swarm attacks. These AI systems are used for autonomous or semi-autonomous targeting and attack, which can directly lead to harm in conflict scenarios. The article highlights the strategic military advantage and prioritization of AI technology in weaponry, indicating a credible risk of future harm. Although no specific incident of harm is reported yet, the nature of the AI system and its intended use plausibly lead to significant harm, qualifying this as an AI Hazard under the framework.
Thumbnail Image

Βόρεια Κορέα: Ο Κιμ Γιονγκ Ουν παρακολούθησε δοκιμές drones εφόρμησης και καλεί να αναπτυχθούν με Τεχνητή Νοημοσύνη

2025-09-19
parapolitika.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI technologies for military drones capable of attack missions, which qualifies as AI system involvement. Although no direct harm has been reported from these tests, the nature of the AI system's intended use in military aggression and the leader's call for rapid AI development for such purposes indicate a credible risk of future harm. This aligns with the definition of an AI Hazard, as the event plausibly could lead to injury or harm to people through autonomous weapons deployment. There is no indication of actual harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the potential risks of AI in military drones.
Thumbnail Image

Ο Κιμ Γιονγκ Ουν παρακολουθεί δοκιμές drones και καλεί να αναπτυχθούν με αξιοποίηση της Τεχνητής Νοημοσύνης

2025-09-19
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as the leader calls for the use of AI technology to enhance attack drones. The use of AI in autonomous or semi-autonomous weapons is well-known to pose significant risks of harm, including injury or death in conflict zones, and disruption to communities. Since the article describes tests and plans for rapid deployment, but does not report actual harm yet, this fits the definition of an AI Hazard: an event where AI system development and use could plausibly lead to harm. The military context and the nature of the AI application (attack drones) make the potential for harm credible and significant.
Thumbnail Image

Ο Κιμ Γιονγκ Ουν παρακολουθεί δοκιμές drones εφόρμησης, καλεί να αναπτυχθούν

2025-09-19
Offsite
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in military drones capable of attack missions, which are tested and considered to provide a significant military advantage. The development and deployment of AI-enabled attack drones pose a credible risk of harm, including injury or death in conflict zones, and disruption related to military operations. Although no specific incident of harm is reported, the event describes the development and intended use of AI-powered autonomous weapons, which could plausibly lead to AI incidents involving harm. Therefore, this qualifies as an AI Hazard under the framework, as it concerns the plausible future harm from AI systems in military drones.
Thumbnail Image

Ο Κιμ Γιονγκ Ουν παρακολουθεί δοκιμές drones εφόρμησης, καλεί να αναπτυχθούν

2025-09-19
Cyprus News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions attack-type drones and the call to develop and use AI for military purposes. Attack drones typically rely on AI for navigation, targeting, and autonomous operation. The event concerns the development and use of AI systems with high potential for misuse and harm. Since no actual harm is reported yet, but the plausible future harm is credible and significant, this qualifies as an AI Hazard under the definitions provided.
Thumbnail Image

Βόρεια Κορέα: Ο Κιμ παρακολουθεί δοκιμές drones εφόρμησης

2025-09-19
Ηλεκτρονική Πύλη ikypros
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (attack drones with AI capabilities) used for military purposes. The article describes ongoing tests and plans for rapid development and production, indicating active use and development of AI systems. While no actual harm is reported yet, the nature of AI-powered attack drones inherently carries a credible risk of causing injury, death, and violations of human rights if deployed in conflict. Therefore, this situation constitutes an AI Hazard due to the plausible future harm from these AI-enabled military drones.
Thumbnail Image

North Korea: Kim Jong Un speeds up attack drones - Tests with artificial intelligence and Russian know-how

2025-09-19
emakedonia.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered attack drones being tested and developed by North Korea, with the leader emphasizing their military advantage and priority development. The use of AI in autonomous attack drones inherently carries a plausible risk of causing harm (injury, death, disruption) if deployed in conflict. Although no specific incident of harm is reported, the development and testing of such AI-enabled weapons systems constitute an AI Hazard because they could plausibly lead to AI Incidents involving significant harm. Therefore, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information.
Thumbnail Image

North Korea: Kim watches attack drone tests - Calls for their development using artificial intelligence

2025-09-19
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and use of AI technology for military attack drones, which are inherently hazardous given their potential to cause injury, death, and broader harm. The event therefore involves autonomous or semi-autonomous AI weapon systems. While no direct harm is reported yet, the plausible future harm from such AI-enabled systems is significant and credible, so this event qualifies as an AI Hazard rather than an AI Incident: the harm is potential but not yet realized.
Thumbnail Image

Kim Jong Un oversees tests of new "suicide drones" and bets on artificial intelligence to modernize the army

2025-09-19
Digi24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in suicide drones tested by North Korea, which are autonomous or semi-autonomous weapon systems capable of lethal attacks. The development and deployment of such AI-enabled weapons inherently carry a credible risk of causing injury or harm to people and disruption to security, fulfilling the criteria for an AI Hazard. Since no actual harm or incident is reported, but the plausible future harm is clear and significant, this event is best classified as an AI Hazard.
Thumbnail Image

North Korea tests tactical attack drones, most likely based on Russian technology

2025-09-19
Stirile ProTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-equipped kamikaze drones capable of detecting and attacking targets, which qualifies as AI systems used in military applications. The testing and deployment of these drones directly relate to potential harm (injury, death, conflict escalation). Since the article does not report actual harm from these drones but highlights their testing and strategic importance, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to people and communities. The mention of the alliance with Russia and deployment of soldiers adds context but does not change the classification.
Thumbnail Image

What Kim Jong-un ordered immediately after watching a drone test in North Korea

2025-09-19
Libertatea
Why's our monitor labelling this an incident or hazard?
The involvement of AI in autonomous military drones with attack capabilities directly relates to the development and use of AI systems that can cause harm to people and communities through military conflict. The article indicates ongoing testing and planned deployment of such AI-enabled weapons, namely AI-powered lethal autonomous systems intended for strategic military use. Because no specific harm event is described, the active development and testing of AI-powered attack drones in a hostile context meets the criteria for an AI Hazard: the AI systems' intended use poses a credible threat of harm to people and communities, but that harm has not yet been realized.
Thumbnail Image

PHOTO: Kim Jong Un supervised the testing of unmanned drones. Developing artificial intelligence for the aircraft is the top military priority - HotNews.ro

2025-09-19
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI development for military drones capable of autonomous attack missions, which are inherently capable of causing injury or harm to persons and communities. The use of AI in these drones to conduct lethal operations without human intervention poses a credible risk of harm. Although no actual harm or incident is reported, the development and testing of such AI systems for military use constitute a plausible future risk of AI-related harm. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Kim Jong-un watched the testing of mysterious drones and ordered these capabilities to be strengthened through artificial intelligence

2025-09-19
Stiri pe surse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled drones being tested and the leader's directive to strengthen these capabilities. While no harm is reported as having occurred yet, the nature of AI-powered kamikaze drones inherently carries a credible risk of causing injury or harm in the future. Therefore, this event fits the definition of an AI Hazard rather than an Incident, as the harm is plausible but not yet realized.
Thumbnail Image

Present at the tests of the North Korean army's new drones, Kim Jong Un called for the rapid development of models with artificial intelligence

2025-09-19
Gândul
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions North Korea's development and testing of AI-equipped drones for military purposes, so AI systems are directly involved. Although no direct harm or incident is reported, the militarization of AI drones and the call for rapid AI development and mass production indicate a credible risk of future harm, such as escalation of armed conflict, border violations, or unintended casualties. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to injury, disruption, or violations of rights. There is no indication of a realized incident or of complementary information about responses or mitigation, so AI Hazard is the appropriate classification.
Thumbnail Image

Kim Jong Un declared the development of AI drones a "national military priority" in North Korea - Aktual24

2025-09-19
Aktual24
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of AI-equipped autonomous drones for military use, which qualifies as an AI system under the definitions. While no direct harm has yet occurred, the article clearly indicates that these AI systems could plausibly lead to significant harm in the future, including injury, disruption, or violations of rights due to their military application. Therefore, this situation fits the definition of an AI Hazard, as it describes credible potential for harm stemming from the AI system's development and intended use.
Thumbnail Image

Kim Jong Un supervised the testing of unmanned drones. Developing artificial intelligence for the aircraft is the top military priority

2025-09-19
Profit.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-enabled drones) used in a military context with demonstrated attack capabilities. While no actual harm is reported, the development and testing of such AI-powered weapon systems plausibly could lead to injury, disruption, or other harms. The article focuses on the development and testing phase and the prioritization of AI for military drones, indicating a credible risk of future harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Kim Jong Un bets on AI implementation to modernize the army

2025-09-21
CugetLiber.ro
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (AI-enabled drones) for military purposes, which inherently carry a credible risk of leading to harm such as escalation of conflict, harm to persons, or disruption of critical infrastructure. Although no specific harm has yet occurred or been reported, the nature of the AI system's development and intended use plausibly leads to significant future harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not realized at this stage.