Taiwan to Deploy AI-Enabled Military Drones with Autonomous Target Recognition


Taiwan's Ministry of National Defense will procure 1,779 military drones from local firms, integrating Acer's AI technology for autonomous object recognition, enemy identification, and self-directed flight. The AI systems, to be tested soon, aim to enhance battlefield awareness but raise concerns about potential risks in military applications.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and use of an AI system for military drones with autonomous capabilities and automatic enemy identification, which constitutes clear AI system involvement. However, no harm or incident has occurred yet; the article discusses planned testing and capabilities. Given the military application and the autonomous nature of the system, there is a credible risk that it could lead to harm in the future (e.g., misidentification causing injury or escalation). It therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
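
To make the triage explicit, the decision path described in the rationale above can be read as a short rule sequence. The Python sketch below is a hypothetical illustration only; the function name, inputs, and label strings are our assumptions and do not reflect the AIM's actual methodology or code.

# Minimal, hypothetical sketch of the hazard/incident triage described above.
# Not the AIM's actual implementation; the fields and labels are assumptions.

def triage(ai_system_involved: bool,
           harm_has_occurred: bool,
           harm_is_plausible: bool) -> str:
    """Return a coarse label mirroring the reasoning in the rationale above."""
    if not ai_system_involved:
        return "Unrelated"
    if harm_has_occurred:
        return "AI Incident"                # harm has already materialised
    if harm_is_plausible:
        return "AI Hazard"                  # credible future risk, e.g. misidentification
    return "Complementary Information"

# This event: AI is clearly involved, no harm yet, but a credible military risk.
print(triage(ai_system_involved=True, harm_has_occurred=False, harm_is_plausible=True))
# -> AI Hazard
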
AI principles
Accountability; Robustness & digital security; Safety; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

Business function:
Monitoring and quality control; Research and development

AI system task:
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Acer's AI boosts 璿元 drones! First cybersecurity certification; friend-or-foe identification system to be field-tested next month

2023-10-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system for military drones with autonomous capabilities and automatic enemy identification, which constitutes clear AI system involvement. However, no harm or incident has occurred yet; the article discusses planned testing and capabilities. Given the military application and the autonomous nature of the system, there is a credible risk that it could lead to harm in the future (e.g., misidentification causing injury or escalation). It therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Acer's AI boosts 璿元 drones; friend-or-foe identification system to be field-tested next month - 自由財經

2023-10-12
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The event involves the development and imminent testing of an AI system integrated into military drones for automatic enemy identification, which is a clear AI system use case. Although no harm has been reported yet, the application of the AI system in military operations could plausibly lead to injury or harm to persons (harm category a) if misidentification or malfunction occurs. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the potential risks and testing of the AI system in a critical application.

Acer signs MOU to help 璿元 upgrade its AI; first cybersecurity-certified drone fleet to be field-tested | 經濟日報

2023-10-12
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI image recognition and autonomous flight AI) integrated into military drones. The event concerns the development and planned testing of these AI-enabled drones, with no indication of actual harm or malfunction causing injury, rights violations, or other harms. Therefore, it does not meet the criteria for an AI Incident. However, given the military application and autonomous capabilities, there is a credible risk that these AI systems could lead to harms in the future, making this an AI Hazard. The article focuses on the collaboration and upcoming tests rather than reporting an incident or harm, so it is not Complementary Information. It is not unrelated because AI systems and their potential impacts are central to the event.

A thousand "military-spec" drones to be added to national defence capabilities; 璿元 and Acer form a national team, with field tests next month | NOWnews今日新聞

2023-10-12
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (AI image recognition, autonomous flight control) integrated into military drones that are explicitly described as being deployed for defence and battlefield operations. The AI's role in automatic enemy identification and autonomous flight directly supports military decision-making and combat activities, which inherently carry risks of harm to persons and communities. Since the article discusses imminent testing and deployment, the AI system's involvement is active rather than hypothetical. However, because no harm has yet occurred, this qualifies as an AI Hazard, given the direct link between the AI system's use and potential harm in a military context.

Acer signs MOU to help 璿元 upgrade its AI; first cybersecurity-certified drone fleet to be field-tested | Newtalk新聞

2023-10-12
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into drones for autonomous flight and automatic recognition of enemy objects. The use is in a military context, with potential for significant harm if the system is misused or malfunctions. However, no actual harm or incident has occurred yet; the article focuses on development, testing, and certification. The potential for harm is credible given the military application and autonomous capabilities, so the event fits the definition of an AI Hazard. It is not Complementary Information because it is not an update on or response to a past incident, nor is it unrelated, as it clearly involves AI systems with plausible future harm.

Ministry of National Defense to procure commercial-specification drones for military use; Acer signs MOU to help 璿元 upgrade its AI | yam News

2023-10-13
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI-enabled military drones with autonomous capabilities for reconnaissance and target identification. Given the military application, the AI systems' use could plausibly lead to injury or harm to persons (harm category a) and disruption of critical infrastructure or military operations (harm category b). The article discusses the AI system's role in enhancing battlefield capabilities, which inherently carries risks of harm in conflict situations. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident, even though no actual harm has yet been reported.

Acer's AI boosts 璿元 drones! First cybersecurity certification; friend-or-foe identification system to be field-tested next month | 三立新聞網 SETN.COM

2023-10-13
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as performing autonomous image recognition and friend-or-foe identification on military drones. The system is in the development and testing phase, with no harm or malfunction reported yet. However, the nature of the system, an autonomous military drone capable of enemy identification and self-directed flight, poses credible risks of harm in future deployments. It therefore fits the definition of an AI Hazard: the AI system's use could plausibly lead to injury or other significant harms, but no direct or indirect harm has yet occurred.