Deployment of AI-Enhanced Combat Systems in Military Aircraft Raises Future Risk Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Lockheed Martin has equipped F-35 and F-16 fighter jets with advanced AI-assisted identification and tracking systems, and the US has deployed AI-enabled F-22 and E-4B aircraft to Israel amid rising tensions. While no harm has occurred, these AI military systems present plausible future risks in conflict scenarios.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (the AI-assisted combat identification module) in a military context. While the article describes the deployment and testing of this AI system, it does not report any actual harm or incidents resulting from its use. There is no mention of injury, damage, rights violations, or other harms caused by the AI system. However, given the military application and the potential for this AI system to influence combat decisions, there is a plausible risk that its malfunction or misuse could lead to harm in the future. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm, but no harm has yet been reported or realized.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
Workers; General public

Harm types
Other

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard


F-35 gets AI: "Combat ID" activated to rapidly scan and analyse the battlefield - Liberty Times Military Channel (自由軍武頻道)

2026-02-26
def.ltn.com.tw

F-35 equipped with AI "Combat ID" opens a new chapter in air combat

2026-02-25
China Times (中時新聞網)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the AI-assisted Combat ID) in a military context, which is explicitly described. There is no indication that any harm has yet occurred due to this AI system's use; the article focuses on testing and capability enhancement. Given the nature of AI in weapon systems, there is a credible potential for future harm (e.g., misidentification leading to unintended engagements). Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no incident has yet materialized.

Boosting Taiwan's F-16 combat power: Lockheed Martin personnel discuss the IRST21 infrared search-and-track pod | Politics | Central News Agency (CNA)

2026-02-25
Central News Agency
Why's our monitor labelling this an incident or hazard?
The IRST21 system is an AI-enabled sensor system used for military surveillance and targeting. Its deployment enhances Taiwan's defense capabilities against potential threats, which implies a plausible future risk of harm in the context of military conflict. However, the article does not describe any actual harm, malfunction, or misuse resulting from the system's use. It mainly reports on the contract award and technical capabilities, which is informative about AI's role in military systems but does not constitute an incident or complementary information about harm or governance responses. Hence, it fits the definition of an AI Hazard, as the system's use could plausibly lead to harm in future conflict scenarios.

(Video) 12 F-22 Raptors deploy to Israel! US "doomsday plane" takes off, a hundred tankers on standby | International | Newtalk News

2026-02-25
Newtalk (新頭殼)
Why's our monitor labelling this an incident or hazard?
The event involves AI-enabled systems (F-22 stealth fighters and E-4B command planes) whose deployment is described in a context of escalating military tensions. No actual harm or incident caused by these AI systems is reported, but the deployment and readiness of these AI-enabled systems in a conflict zone plausibly could lead to harm in the future. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to injury, disruption, or harm if conflict occurs. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated, since the AI system involvement and plausible future harm are central to the event.

Will fighter jets still need human pilots in 20 years? Dispute erupts over the roadmap for Europe's next-generation fighter as Airbus CEO takes a stance - Liberty Times Military Channel (自由軍武頻道)

2026-02-25
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of unmanned drones and integrated combat networks, which are part of the FCAS program. However, the article primarily covers strategic debates, development plans, and future projections without any direct or indirect harm occurring or imminent. There is no mention of malfunction, misuse, or harm caused by these AI systems. Therefore, this is a discussion of potential future AI applications and strategic planning, which fits the definition of Complementary Information as it provides context and updates on AI-related defense developments without reporting an incident or hazard.

Changing pilots mid-flight: Anduril drone successfully swaps AI pilots while airborne - Liberty Times Military Channel (自由軍武頻道)

2026-02-26
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (two different AI mission software systems controlling the drone autonomously). However, the article describes a successful test flight without any reported harm, malfunction, or incident causing injury, rights violations, or other harms. The event demonstrates a technological capability that could plausibly lead to future AI-related risks in military contexts, but no harm has occurred yet. Therefore, it qualifies as an AI Hazard due to the plausible future risk associated with autonomous military drones and AI mission software, but not an AI Incident or Complementary Information.

Countering stealth fighters with heat: Taiwan's F-16s to be fitted with the IRST21 Legion Pod

2026-02-26
TechNews (科技新報)
Why's our monitor labelling this an incident or hazard?
The IRST21 system is an AI-enabled infrared search and tracking sensor that processes thermal data to detect and track aircraft, including stealth targets. Its use involves AI for real-time data analysis and target identification, which qualifies it as an AI system. However, the article does not report any actual harm or incident caused by the system's development or use; rather, it describes a planned military capability enhancement. There is no indication of injury, rights violations, or other harms occurring or having occurred. The article implies a strategic military advantage and potential future use in conflict scenarios, but no direct or indirect harm has yet materialized. Therefore, this event represents a plausible future risk context related to AI-enabled military technology but does not describe an incident or immediate hazard. It is best classified as Complementary Information, providing context on AI system deployment and strategic implications without reporting harm or imminent risk.

Belgium to deploy air-defence system at the Port of Antwerp in response to drone threat

2026-02-26
Gamereactor China
Why's our monitor labelling this an incident or hazard?
The drones interfering with critical infrastructure likely involve AI systems for autonomous or semi-autonomous operation, posing a credible risk of harm to public safety and critical infrastructure. The deployment of a defense system is a response to this threat. Since no direct harm has been reported yet, but the threat is credible and plausible, this event fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news but concerns a specific plausible risk related to AI-enabled drones.