Taiwan tests autonomy and anti-jamming in naval USV trials; SeaShark 800 enters Kuaiqi contest


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Taiwan’s National Chung-Shan Institute of Science and Technology (NCSIST) sailed its autonomous attack USV “Kuaiqi” alongside the electronic-warfare ship “Haihu 1” off Su-ao to evaluate anti-jamming performance. Concurrently, Thunder Tiger’s AI-enabled suicide USV “SeaShark 800,” which features a 50-knot top speed, a 600 km range, AI target recognition, and swarm control, has registered for the Kuaiqi project’s upcoming trials.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as an autonomous unmanned attack boat equipped with AI target recognition and swarm control capabilities. The system is intended for lethal military applications, which inherently carry risks of injury, death, and disruption of critical infrastructure (naval assets). While no harm has yet occurred, the development and imminent testing of such a system could plausibly lead to AI incidents involving physical harm and security threats. Therefore, this qualifies as an AI Hazard due to the credible potential for significant harm stemming from the AI system's use in autonomous lethal operations.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles; Digital security

Affected stakeholders
General public

Harm types
Physical (injury); Physical (death); Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Research and development; Monitoring and quality control

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


"Kuaiqi Project" USVs to face off outside Su-ao harbour in June; Thunder Tiger's "SeaShark" confirmed to compete

2025-04-25
Yahoo News (Taiwan)

Thunder Tiger's suicide unmanned fast boat formally registers for NCSIST's "Kuaiqi Project" - Liberty Times Finance

2025-04-25
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into an autonomous suicide attack USV with capabilities such as AI target recognition and swarm control. The system is designed to carry explosives and conduct attacks autonomously or via remote control. While no harm has yet occurred, the nature of the system and its intended use in military attacks clearly pose a credible risk of harm. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the development and deployment of this AI-enabled lethal autonomous weapon system.

Thunder Tiger Technology unveils its suicide unmanned fast boat "SeaShark 800" | UDN

2025-04-25
UDN
Why's our monitor labelling this an incident or hazard?
The SeaShark 800 USV is an AI system with autonomous navigation and AI target recognition designed for lethal military applications (suicide attack unmanned boat). Its development and testing for offensive military use inherently carry a credible risk of causing harm to persons and disruption of critical infrastructure or military operations. Since the article focuses on the system's capabilities and upcoming performance tests without reporting any actual harm yet, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's role is pivotal in enabling autonomous lethal attacks, making the event a clear AI Hazard.

Thunder Tiger's suicide USV unveiled; company claims three features exceed NCSIST's required standards | UDN

2025-04-25
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as having advanced AI target recognition and autonomous control capabilities integrated into a suicide attack unmanned boat. While no harm has yet occurred, the system's intended use as an autonomous weapon capable of lethal attacks clearly presents a credible risk of causing injury, violation of rights, and other harms in the future. The article focuses on the system's capabilities and testing rather than any realized incident, fitting the definition of an AI Hazard rather than an AI Incident. The autonomous lethal nature and swarm control features elevate the risk profile, justifying classification as an AI Hazard.

Thunder Tiger shows off new products to tap defense-industry business | Economic Daily News

2025-04-25
UDN Money
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of an AI system (an autonomous unmanned attack boat) intended for military use. Although no incident or harm has been reported yet, the nature of the system and its intended use in attack operations imply a credible risk of future harm. Therefore, this qualifies as an AI Hazard under the framework, as the AI system's development and potential use could plausibly lead to an AI Incident involving injury, disruption, or other harms.

Exclusive: "Haihu 1" spotted off Su-ao, helping "Kuaiqi" USVs train against jamming - Liberty Times Defense Channel

2025-04-24
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The "Kuaiqi" unmanned boat qualifies as an AI system due to its autonomous navigation and remote control capabilities. The event involves the use and testing of this AI system's anti-interference performance in a controlled maritime environment. Although no direct harm or incident has occurred, the nature of the system and its military application imply a credible risk of future harm, such as accidents, misuse, or escalation in military conflicts. The article does not report any realized harm or incident but highlights a test that could plausibly lead to an AI-related incident in the future. Hence, the event is best classified as an AI Hazard.

Thunder Tiger's suicide USV unveiled; three features exceed NCSIST requirements

2025-04-25
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The USV described is an AI system due to its autonomous navigation, AI target recognition, and swarm control capabilities. It is intended for use as a suicide attack drone carrying explosives, which inherently involves potential harm to human life and property. Since the article discusses the system's development and upcoming testing but does not report any actual harm yet, it constitutes an AI Hazard. The plausible future harm includes injury or death, disruption, and violation of rights due to its military offensive use. Therefore, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.