Intelligent Warfare Theory and AI-Driven Information Warfare


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The articles explore future trends in AI-enabled warfare, highlighting shifts toward algorithm-driven combat and preemptive tactical design. They also discuss the threat of AI-powered disinformation, including deepfakes, used in information warfare by authoritarian states, urging democratic nations to develop effective counterstrategies.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of unmanned drones for reconnaissance and target guidance in a military strike that destroyed military vehicles and caused casualties among soldiers. The drones' autonomous or semi-autonomous functions in identifying and confirming targets and in guiding missile strikes fit the definition of AI systems influencing physical environments. The resulting harm to personnel and property meets the criteria for an AI Incident under the OECD framework, since the system's use directly led to injury and damage in a conflict setting.[AI generated]
AI principles
Democracy & human autonomy; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Accountability; Privacy & data governance; Human wellbeing

Industries
Government, security, and defence; Digital security; Media, social platforms, and marketing; Robots, sensors, and IT hardware

Affected stakeholders
General public; Government; Civil society

Harm types
Physical (death); Physical (injury); Public interest; Human or fundamental rights; Reputational; Psychological; Economic/Property

Severity
AI incident

Business function
ICT management and information security; Marketing and advertisement; Research and development; Monitoring and quality control

AI system task
Content generation; Goal-driven organisation; Forecasting/prediction; Recognition/object detection; Organisation/recommenders; Reasoning with knowledge structures/planning


Articles about this incident or hazard


Russian missile strike hits its mark: half the credit goes to the drones

2025-02-21
chinanews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of unmanned drones for reconnaissance and target guidance in a military strike that destroyed military vehicles and caused casualties among soldiers. The drones' autonomous or semi-autonomous functions in identifying and confirming targets and in guiding missile strikes fit the definition of AI systems influencing physical environments. The resulting harm to personnel and property meets the criteria for an AI Incident under the OECD framework, since the system's use directly led to injury and damage in a conflict setting.

August 1st Commentary | Accelerating the development of a new type of military talent force

2025-02-18
81.cn
Why's our monitor labelling this an incident or hazard?
The article centers on military talent development in the context of increasing AI and intelligent technology integration in warfare. It highlights the importance of human factors in AI-enabled military systems and the strategic approach to cultivating such talent. There is no mention of any realized harm, incident, or plausible future harm caused by AI systems. The content is primarily about governance, strategy, and capacity building, which fits the definition of Complementary Information as it provides context and policy response related to AI in the military domain without reporting a new incident or hazard.

Military Forum | A brief analysis of how new-quality cyber-electromagnetic combat power is generated

2025-02-20
81.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI and intelligent systems as key enablers in the development of new net-electronic combat power, indicating the involvement of AI systems. However, it is a conceptual and analytical piece focusing on the mechanisms and strategies for generating and cultivating such combat power. There is no description of any actual incident, malfunction, or misuse leading to harm, nor a specific credible risk event. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It also does not primarily report on responses, updates, or governance measures related to AI incidents or hazards. Hence, it is best classified as Complementary Information, providing contextual and strategic insights into AI's role in military capabilities.

Britain and France step up development of unmanned combat equipment

2025-02-21
People's Daily Online (人民网), military channel
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled unmanned combat systems being developed and tested for military use, which qualify as AI systems. There is no report yet of actual harm or incidents caused by these systems, but their intended use in warfare and their autonomous operation could plausibly lead to injury, disruption, or other harms. The development and deployment of such systems with lethal capabilities is widely recognised as a potential AI Hazard. The event is therefore best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Russian forces drop a 3-tonne-class bomb on Ukrainian troops inside Russia; a new combat model shows its power

2025-02-20
China.com (中华网), military channel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-assisted unmanned drone swarms for reconnaissance and battlefield assessment, which improved strike efficiency and contributed to the destruction of enemy defenses and casualties. The harms include injury and death of personnel (including foreign fighters) and destruction of military infrastructure, which fall under harm to persons and property. The AI system's role is pivotal in enhancing the effectiveness of the attacks, thus directly leading to these harms. Therefore, this event qualifies as an AI Incident.

Scholar: the balance of the Russia-Ukraine war is tipping as modern technology reshapes the battlefield

2025-02-20
China.com (中华网), military channel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems, explicitly or implicitly, through autonomous or semi-autonomous drones and electronic-warfare technologies that rely on AI for reconnaissance, targeting, and strike capabilities. The use of these systems has directly led to harm (loss of life and military defeat), which qualifies as injury or harm to groups of people. This is therefore an AI Incident: AI-enabled military technologies directly caused harm in an armed conflict.

2025 auto market: the smart-driving war arrives, but is the price war over?

2025-02-19
Hexun.com (和讯网)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of advanced intelligent driving systems (e.g., BYD's 'Tianshen Eye' system) being developed and deployed in vehicles. The focus is on the strategic shift in the automotive industry towards AI-enabled smart driving technologies and their market penetration. There is no mention of any harm, malfunction, or misuse resulting from these AI systems, nor any credible warning of plausible future harm. The article mainly provides an overview of industry developments, competitive strategies, and technological trends related to AI in automotive smart driving. This fits the definition of Complementary Information, as it enhances understanding of AI system deployment and ecosystem evolution without describing a new AI Incident or AI Hazard.

Russian media article: nearly three years into the Russia-Ukraine conflict, the Russian military has changed dramatically

2025-02-21
Sina (新浪), military channel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of unmanned drones (FPV drones, attack drones, long-range drones), which qualify as AI systems through their autonomous or semi-autonomous capabilities in navigation, targeting, and attack. These drones and other AI-enabled weapons have been actively used in combat, destroying enemy fortifications and reshaping military tactics. This constitutes direct harm to property and communities, and likely to human life, fulfilling the criteria for an AI Incident. The article does not merely speculate about potential harm; it reports ongoing, realised military impacts involving AI systems.

Perspectives on Intelligent Warfare | A brief analysis of theoretical innovation in intelligent operations

2025-02-18
Sina (新浪), military channel
Why's our monitor labelling this an incident or hazard?
The content centers on the theoretical and strategic implications of AI in warfare, emphasizing future developments and innovation rather than any actual incident or hazard. There is no mention of any direct or indirect harm caused by AI systems, nor any specific event that could plausibly lead to harm. The article serves as complementary information by providing context and insight into the evolving role of AI in military operations and the associated theoretical innovations. Therefore, it fits the category of Complementary Information rather than AI Incident or AI Hazard.

Nearly three years into the Russia-Ukraine conflict, the Russian military has changed dramatically

2025-02-21
China.com (中华网), military channel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of unmanned drones (FPV drones and other attack drones) by the Russian military in active combat, which are AI systems due to their autonomous or semi-autonomous capabilities in navigation, targeting, and attack. The use of these AI-enabled drones has directly influenced battlefield outcomes and tactics, including causing harm to enemy forces and infrastructure. This constitutes direct harm caused by AI systems in a military conflict context, fitting the definition of an AI Incident as the AI system's use has directly led to harm in a conflict setting. Therefore, this event is classified as an AI Incident.