Russian Wagner Group and Chinese Spies Develop AI Swarm Drones for Ukraine Conflict

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

The Russian Wagner Group, in collaboration with Chinese spies and cyber experts, is developing AI-powered swarm drones capable of coordinated autonomous attacks. Over 2,500 drones have reportedly been shipped from China to Russia for use against Ukrainian military and civilian targets, raising concerns over AI-enabled warfare and civilian harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI directing a swarm network of drones that carry explosive tips and drop bombs on military or civilian targets, causing destruction. This constitutes direct harm to persons and property, fulfilling the criteria for an AI Incident. The involvement of AI in coordinating attacks and espionage in an active conflict zone indicates realized harm rather than just potential risk, thus classifying this as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security; Mobility and autonomous vehicles

Affected stakeholders
General public; Government

Harm types
Physical (death); Physical (injury); Public interest; Human or fundamental rights; Psychological; Economic/Property

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Wagner group and Chinese spies plot against Ukraine

2023-01-31
EXPRESS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI directing a swarm network of drones that carry explosive tips and drop bombs on military or civilian targets, causing destruction. This constitutes direct harm to persons and property, fulfilling the criteria for an AI Incident. The involvement of AI in coordinating attacks and espionage in an active conflict zone indicates realized harm rather than just potential risk, thus classifying this as an AI Incident rather than a hazard or complementary information.
Russia's Wagner Group secretly develop 'swarm drones' in deal with Chinese spies

2023-01-30
Mirror
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for autonomous drone swarm coordination, which is explicitly described. The drones are intended for bombing and surveillance in a conflict zone, directly causing or enabling harm to people and communities. The article reports ongoing development and deployment, indicating realized harm potential. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-enabled military applications.
Russia's Wagner Group 'secretly developing' swarm drones with Chinese spies, claims report

2023-01-31
Republic World
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI-enabled autonomous swarm drones capable of coordinated attacks, which directly implicates potential harm to people and property. The AI system's development and use in a military context with covert operations and explosive payloads clearly meet the criteria for an AI Hazard, as the harm is plausible but not yet confirmed or realized. The report's uncertainty about reliability does not negate the credible risk posed by such systems. Since no actual harm is reported as having occurred yet, this is best classified as an AI Hazard rather than an AI Incident.
Russia's 'Killer' Swarm Drone Project: Russian, Chinese Agencies 'Partner' To Attack Ukraine's Military Infra - British Media Claims

2023-02-01
Latest Asian, Middle-East, EurAsian, Indian News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-enabled swarm drones capable of coordinated autonomous attacks, which are being developed and deployed by the Wagner Group against Ukraine. This involves the use of AI systems in a military context leading to direct harm to people and military infrastructure, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in enabling the swarm attacks. Therefore, this event is best classified as an AI Incident.
'Swarm drones developed by Russian mercenaries and Chinese spies'

2023-01-31
Hull Daily Mail
Why's our monitor labelling this an incident or hazard?
The article details the development of AI-powered swarm drones capable of coordinated autonomous attacks and surveillance, which could lead to injury or harm to people and disruption in conflict zones. Although no specific incident of harm is reported, the nature of the technology and its intended use in warfare represent a credible and significant risk of harm. The AI system's development and intended use in autonomous lethal operations fit the definition of an AI Hazard, as it could plausibly lead to an AI Incident involving injury, harm, or violation of rights. Therefore, this event is best classified as an AI Hazard.