Eric Schmidt's White Stork develops AI-powered kamikaze drones for Ukraine

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Former Google CEO Eric Schmidt's covert startup White Stork is mass-producing AI-powered kamikaze drones for Ukraine. Equipped with advanced computer vision to identify targets even in GPS-jammed environments, these single-use explosive drones are designed to operate autonomously on the battlefield, raising significant risks of harm and ethical concerns over lethal autonomous weapons.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the development of AI-powered attack drones capable of target identification, which involves AI systems. Although no incident of harm has yet occurred, the intended use of these AI systems in autonomous weaponry poses a credible risk of causing serious harm, including injury, violations of human rights, and damage to communities or property. The development and planned mass production of such AI-enabled military systems with high potential for misuse qualify this event as an AI Hazard under the OECD framework.[AI generated]
AI principles
Accountability; Respect of human rights; Safety; Democracy & human autonomy; Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Human or fundamental rights

Severity
AI hazard

Business function
Manufacturing

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Eric Schmidt's Secret 'White Stork' Project Aims To Build AI Combat Drones

2024-01-23
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using artificial intelligence for visual targeting in combat drones, which are designed to operate in hostile environments. The use of such AI-enabled drones in an active conflict zone (Ukraine) implies direct or indirect harm to persons and communities, as these drones are intended as weapons. The article details the development and deployment efforts, indicating the AI system's use rather than just potential future harm. Therefore, this qualifies as an AI Incident because the AI system's use is directly linked to harm in a military conflict context.

Ex-Google CEO Eric Schmidt quietly created a company called White Stork, which plans to build AI-powered attack drones, report says

2024-01-24
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development of AI-powered attack drones capable of target identification, which involves AI systems. Although no incident of harm has yet occurred, the intended use of these AI systems in autonomous weaponry poses a credible risk of causing serious harm, including injury, violations of human rights, and damage to communities or property. The development and planned mass production of such AI-enabled military systems with high potential for misuse qualify this event as an AI Hazard under the OECD framework.

Former Google CEO Gets Into the AI-Powered Kamikaze Drone Business With 'White Stork'

2024-01-24
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered kamikaze drones being developed and sold for military use in an active war, with AI used to pinpoint targets and evade defense systems. This clearly involves an AI system whose use leads directly to harm (injury and death) in warfare, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as these drones are actively used in conflict. Therefore, this event qualifies as an AI Incident due to the direct link between AI system use and harm in a military context.

Former Google CEO's New Startup Will Build AI Attack Drones

2024-01-25
ExtremeTech
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development of AI-powered attack drones capable of autonomous target identification and neutralization, which clearly involves AI systems. Although no incident of harm has yet occurred, the use of AI in lethal autonomous military drones poses a credible and significant risk of injury, death, and human rights violations. The event concerns the development and intended use of these AI systems, which could plausibly lead to an AI Incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Google's former CEO is building Kamikaze drones for Ukraine

2024-01-26
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the company is building AI-powered kamikaze drones armed with explosives for use in the Ukraine conflict. These drones are AI systems designed to inflict physical harm on persons. The development and use of such autonomous weapons systems clearly fall under the definition of an AI Incident because the AI system's use is directly linked to causing harm in a real-world conflict. The article describes ongoing development and deployment rather than a potential or future risk, so it is not an AI Hazard. It is not merely complementary information or unrelated news because the AI system's role in causing harm is central to the event described.

Former Google CEO Allegedly Developing Suicide Attack Drones For Ukraine War

2024-01-24
HotHardware
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for visual targeting in suicide drones designed for military use. While no actual incident of harm is reported, the intended use of these AI-powered drones as autonomous weapons in an active conflict zone implies a credible risk of causing injury or death, which fits the definition of an AI Hazard. The development and potential deployment of such lethal autonomous weapons systems constitute a recognized AI Hazard given their capacity to cause significant harm. Since no realized harm is described, it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the development and potential use of AI systems with a clear risk of harm.

Ex-Google CEO Eric Schmidt quietly created a company called White Stork, which plans to build AI-powered attack drones, report says

2024-01-24
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article details the creation of AI attack drones capable of autonomous target identification, which clearly involves AI systems. Although no incident of harm has occurred, the development and potential deployment of such autonomous weapons pose a credible risk of causing serious harm, including injury and violations of human rights. According to the definitions, the mere development or offering for sale of AI-enabled systems with high potential for misuse, like AI-powered autonomous weapons, qualifies as an AI Hazard. Hence, this event is best classified as an AI Hazard.

Former Google CEO Eric Schmidt jumps into AI attack drones space, looks to transform military tech

2024-01-25
HT Tech
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems—specifically AI-powered attack drones intended for military applications. While no actual harm has been reported yet, the nature of these AI systems and their intended use in combat scenarios create a credible risk of causing injury, death, or other serious harms. This fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident involving harm to people and violation of rights. Therefore, the event is best classified as an AI Hazard rather than an Incident or Complementary Information.

Ex-Google Chief Pushing to Deploy AI-Driven, Kamikaze Drones in Ukraine

2024-01-25
The Messenger
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for visual targeting in kamikaze drones intended for military use in Ukraine. The AI system's development and intended use in lethal autonomous weapons present a credible and plausible risk of causing injury or death (harm to persons) and harm to property and communities. While no specific incident of harm has yet occurred or been reported, the active development and plans for mass production and deployment in a war zone constitute a plausible future harm scenario. Hence, this is classified as an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information, as it focuses on the potential for significant harm from AI-enabled weapons.

Google's former CEO enters the AI-powered kamikaze drone business with White Stork

2024-01-24
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as kamikaze drones using AI for target identification and defense evasion, which are actively deployed in warfare causing harm to people and communities. The involvement of AI in causing injury and harm in an armed conflict meets the definition of an AI Incident. The article details ongoing use and harm, not just potential future risk, so it is not merely a hazard. It is not complementary information since the main focus is on the active deployment and harm caused by these AI systems, nor is it unrelated.

The former Google CEO's company now has a purpose: building AI attack drones

2024-01-26
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (autonomous attack drones) that can identify and eliminate targets without human intervention, which clearly fits the definition of an AI system. The article does not report any actual harm caused yet but highlights the potential for significant harm, including injury or death, from these AI-powered weapons. The concerns and agreements to limit such weapons underscore the recognized risk. Since the harm is plausible but not yet realized, this is best classified as an AI Hazard rather than an AI Incident. The article focuses on the development and deployment plans, not on a realized incident or harm, and it is not merely complementary information or unrelated news.

Eric Schmidt already knows where to invest part of the fortune he made at Google: AI-powered "kamikaze" drones

2024-01-26
Xataka
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into military drones designed to autonomously identify targets and operate in contested environments. Although no specific harm has yet occurred from these drones, their intended use as autonomous lethal weapons presents a credible risk of causing injury or death, disruption, and violations of human rights. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the development and deployment of AI-powered kamikaze drones.

A former Google CEO creates a company that will build AI-powered "killer" drones

2024-01-25
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in autonomous military drones designed to identify and attack targets, which fits the definition of an AI system. The event concerns the development and intended use of these AI systems, which could plausibly lead to significant harm, including injury, violations of human rights, and other serious consequences. Since no actual harm has been reported yet, but the risk is credible and significant, the event qualifies as an AI Hazard rather than an AI Incident. It is not Complementary Information because it introduces a new primary risk, nor is it Unrelated as it directly involves AI systems with potential for harm.

Eric Schmidt's secret 'White Stork' project aims to build AI combat drones

2024-01-24
Forbes México
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in combat drones designed to target and operate autonomously or semi-autonomously in contested environments, which directly relates to harm in the form of physical injury or death and disruption of critical infrastructure (military operations). The AI system's development and deployment in an active war zone, where harm is occurring or imminent, meet the criteria for an AI Incident. The involvement is actual and ongoing rather than speculative, with clear links to harm. Hence, it is an incident rather than merely a hazard or complementary information.

Former Google CEO creates a company that will build AI-powered "killer" drones

2024-01-27
FayerWayer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems embedded in autonomous lethal drones designed to identify targets and carry out military operations independently. This clearly fits the definition of an AI system. The event concerns the development and intended use of these systems, which could plausibly lead to serious harm (injury or death) and violations of human rights. Since no actual harm is reported yet, but the risk is credible and significant, this qualifies as an AI Hazard. The secretive nature and the involvement of a prominent figure in technology and defense further underscore the potential for future harm. Therefore, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

He was Google's CEO and now builds attack drones: Eric Schmidt's new business

2024-01-25
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (attack drones with AI for autonomous targeting) that could plausibly lead to significant harm, including injury or death in conflict zones. Although no actual harm is reported yet, the nature of the AI system and its military application present a credible risk of future harm. Therefore, this qualifies as an AI Hazard under the framework, as the AI system's development and intended use could plausibly lead to an AI Incident involving injury or harm to persons or groups.