LIG Nex1 Unveils AI-Powered Swarm Suicide Drones at DSK 2026

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

LIG Nex1 publicly unveiled AI-based swarm suicide drones at the DSK 2026 exhibition in Busan, South Korea. Developed with the Agency for Defense Development, these autonomous drones are designed for coordinated military operations, raising credible concerns about the risk of future harm from AI-enabled lethal weapon systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves systems explicitly described as AI-based swarm drones for military use, which fit the definition of an AI system. The article focuses on the development and first public display of these systems, with no mention of any harm or incident caused by them. Given the nature of autonomous military drones with swarm capabilities, there is a credible risk that their deployment could lead to future harms such as injury, violations of rights, or harm to communities. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Safety, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Human or fundamental rights

Severity
AI hazard

AI system task
Goal-driven organisation


Articles about this incident or hazard

LIG Nex1 Unveils AI Swarm Drones for the First Time at DSK 2026

2026-02-25
아시아경제
Why's our monitor labelling this an incident or hazard?
The event involves systems explicitly described as AI-based swarm drones for military use, which fit the definition of an AI system. The article focuses on the development and first public display of these systems, with no mention of any harm or incident caused by them. Given the nature of autonomous military drones with swarm capabilities, there is a credible risk that their deployment could lead to future harms such as injury, violations of rights, or harm to communities. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

AI-Equipped K-Drones to Adorn BEXCO in Busan... Korean Air, KAI, and LIG Nex1 Clash Over Cutting-Edge Technology

2026-02-25
에너지경제신문
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into various drones and air traffic management platforms. While no actual harm or incident is reported, the showcased AI military drones, autonomous swarm drones, and AI traffic control systems have clear potential to cause harm, such as injury, disruption, or violations of rights, if misused or if they malfunction. The event is a public exhibition of these technologies, not a report of an incident or a response to one, so it does not qualify as an AI Incident or Complementary Information. Given the credible potential for future harm from these AI-enabled systems, especially in military applications, classification as an AI Hazard is appropriate.

LIG Nex1 Joins 'DSK 2026'... Unveils AI-Based Swarm Drones for the First Time

2026-02-25
이투데이
Why's our monitor labelling this an incident or hazard?
The event involves the development and public unveiling of AI-based autonomous swarm drones intended for military use, which are AI systems by definition due to their autonomous operational capabilities and AI involvement in swarm coordination. While the article does not describe any realized harm or incident, the nature of these AI systems—autonomous lethal drones—implies a credible risk of future harm, including injury or death, disruption, or violations of rights. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident, as it plausibly could lead to harm but no harm has yet occurred or been reported.

"AI 기반 군집 자폭형 드론"...LIG넥스원, DSK 2026 참가 | 아주경제

2026-02-25
아주경제
Why's our monitor labelling this an incident or hazard?
The event involves the development and public unveiling of an AI-based autonomous swarm suicide drone, which is an AI system designed for lethal military applications. Although no harm has yet occurred, the nature of the system and its intended use imply a credible risk of causing injury or harm in the future. This fits the definition of an AI Hazard, as the AI system's development and potential use could plausibly lead to an AI Incident involving harm to persons or disruption of critical infrastructure in conflict scenarios. There is no indication of an actual incident or harm yet, nor is the article primarily about responses or governance, so it is not an AI Incident or Complementary Information.

LIG Nex1 Unveils AI-Based Swarm Drones at 'DSK 2026'

2026-02-25
inews24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into swarm drones designed for military use, which fits the definition of an AI system. There is no indication that any harm has yet occurred, so it is not an AI Incident. However, the development and public unveiling of such AI-enabled autonomous weapon systems with offensive capabilities constitute a credible potential for harm, qualifying it as an AI Hazard. The article focuses on the unveiling and capabilities rather than any harm or mitigation, so it is not Complementary Information. It is clearly related to AI and potential harm, so it is not Unrelated.

LIG Nex1 Joins Drone Show Korea... Unveils AI-Based Swarm Drones

2026-02-25
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-based swarm drone system designed for military applications, including self-destructing small drones capable of coordinated swarm operations. Although no incident or harm has occurred yet, the nature of the AI system and its intended use in combat scenarios imply a credible risk of future harm. The event is about the development and public display of this AI-enabled military technology, which fits the definition of an AI Hazard as it could plausibly lead to injury, disruption, or other harms if deployed.

LIG Nex1 Joins 'DSK 2026', Unveils AI-Based Swarm Drones for the First Time

2026-02-25
kr.acrofan.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based autonomous swarm drones for military applications, including suicide drones. Although no incident of harm is reported, the development and public unveiling of such AI-enabled autonomous weapon systems inherently carry a credible risk of future harm, such as injury or loss of life, disruption, or violations of human rights. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

LIG Nex1 Joins 'DSK 2026'... First Unveiling of AI Swarm Drones

2026-02-25
NewsTomato
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI-based swarm drones with autonomous capabilities, including suicide drones. Although no harm has yet occurred, the nature of these AI systems—autonomous lethal drones—poses a credible risk of causing injury, violations of rights, or other significant harms if deployed or misused. The article focuses on the development and first public unveiling, not on any incident or harm already caused. Hence, it fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident in the future.

LIG Nex1 Joins 'DSK 2026'... Presents Vision for Future Combat Systems

2026-02-25
매일일보
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI-based swarm drones and advanced electro-optical systems). There is no indication of any realized harm or incident caused by these systems yet. The article focuses on the development and exhibition of these AI-enabled military technologies, which could plausibly lead to harm in the future given their military nature and capabilities. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Korean Air, LIG Nex1, and KAI Out in Force at 'Drone Show Korea'... AI Drones Unveiled

2026-02-25
디지털데일리
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-based unmanned drones and autonomous systems) and their development and use, but there is no indication that these systems have caused any injury, rights violations, disruption, or other harms. The article is primarily about the announcement and demonstration of AI-enabled drone technologies at an industry exhibition, which is informative and forward-looking. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on AI system developments and their potential implications in aerospace and defense sectors.

LIG Nex1 Unveils AI-Based Swarm Suicide Drones for the First Time at 'DSK 2026'

2026-02-25
포인트데일리
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI-based swarm suicide drones and other autonomous unmanned aerial vehicles with military applications. Although no harm has yet occurred, the nature of these AI systems—autonomous lethal drones—implies a credible risk of future harm, including injury or death and other serious consequences. The article focuses on the development and exhibition of these AI-enabled military systems, which fits the definition of an AI Hazard as it plausibly could lead to an AI Incident in the future. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information or unrelated news, as the AI system's potential for harm is central to the report.

'Swarming to Strike the Enemy with AI'... LIG Nex1 Unveils 'Swarm Suicide Drones' for the First Time at Drone Show Korea

2026-02-25
브릿지경제
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI used to coordinate swarm attacks by multiple drones, which qualifies the drones as an AI system. The event concerns the development and public display of this AI-enabled military technology, which has a clear potential to cause harm (injury or death) if used in conflict. Since no actual harm has occurred yet, but the plausible future harm is credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely Complementary Information because the main focus is the unveiling of a potentially harmful AI system, not responses or updates to past incidents.