Global Surge in AI-Enabled Drone Deployment Prompts Security and Privacy Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-enabled drones are reshaping conflict and policing globally. Israel's military employs autonomous unmanned vehicles and Hermes drones for surveillance and precision strikes in Gaza, while Iranian Shahed loitering munitions have struck Eilat. Meanwhile, the Denver Police Department's plan to deploy drones in response to 911 calls is prompting privacy debates, and India-US talks are progressing on a $3.9 billion MQ-9B acquisition.[AI generated]

Why's our monitor labelling this an incident or hazard?

The MQ-9B drones are AI systems due to their autonomous or semi-autonomous operational capabilities. The article focuses on negotiations and technology sharing for their acquisition, with no mention of any harm or incident caused by these drones so far. However, the military use of such AI-enabled drones carries plausible risks of harm in the future, including injury, disruption, or violations of rights. Since the event concerns ongoing talks and potential future deployment without realized harm, it fits the definition of an AI Hazard.[AI generated]
AI principles
Accountability, Human wellbeing, Privacy & data governance, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard


India-US discussions underway for $3.9 billion MQ-9B drones acquisition

2024-06-02
Economic Times

Why 2024 is the 'year of the drone' after the Hamas drone attack on...

2024-06-01
New York Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems embedded in kamikaze drones that have directly led to harm, including attacks on soldiers, bases, and civilian infrastructure. The drones' autonomous or semi-autonomous operation, their deployment in combat causing injury and damage, and the discussion of their evasion of defence systems confirm AI system involvement and realized harm. This meets the definition of an AI Incident, as the AI systems' use has directly led to injury to persons and to harm to property and communities.

How does IDF innovation give it the edge in war with Hamas?

2024-05-30
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled drones and loitering munitions used by the IDF in active combat against Hamas. These systems provide real-time intelligence, target identification, and precision strikes, which directly contribute to harm in the conflict. The AI systems are integral to military operations that result in injury, death, and disruption, fulfilling the criteria for an AI Incident. Although the article focuses on technological innovation, the context is an active war zone where harm is occurring, and the AI systems are pivotal in causing that harm.

Denver to deploy DRONES to respond to 911 calls - NaturalNews.com

2024-06-01
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (drones with autonomous or semi-autonomous capabilities) used in law enforcement to respond to emergency calls. Although no direct harm has yet occurred, the article highlights credible concerns about privacy violations, potential misuse, and disproportionate impact on marginalized communities, which are plausible harms under the AI Hazard definition. The AI system's use in surveillance and decision-making about police deployment could plausibly lead to violations of rights and harm to communities. Since no actual harm is reported, and the focus is on potential risks and societal implications, the classification as AI Hazard is appropriate.

The Rise Of Drones In Modern Warfare - What More To Look For?

2024-05-30
Inc42 Media
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, as it describes drones with AI-embedded sensor suites, autonomous navigation, and swarming algorithms, which qualify as AI systems under the definition. However, the article does not describe any actual harm or incident caused by these AI systems, nor does it report any direct or indirect harm resulting from their use. The discussion concerns the development, deployment, and capabilities of AI-enabled drones in military contexts, which could plausibly lead to harm in the future, but no specific hazard event or incident is described. The event is therefore best classified as an AI Hazard: it outlines the plausible future risks and strategic implications of AI-enabled drones in warfare without reporting a realized harm or incident.