Rafael Unveils AI-Enabled Loitering Missiles with Autonomous Targeting

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Israeli defense company Rafael has introduced the L-SPIKE family of loitering munitions, featuring AI-based autonomous target recognition and navigation. The L-SPIKE 4X missile can strike targets up to 40 km away, loiter for 25 minutes, and operate without GPS, raising concerns about the risks of AI-driven lethal weaponry.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article focuses on the development and unveiling of AI-enabled loitering missiles with advanced targeting and loitering capabilities. While no actual harm or incident is reported, the AI system's use in autonomous or semi-autonomous weaponry inherently carries a credible risk of causing injury, death, or destruction in military contexts. Therefore, this event fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident involving harm to persons or property.[AI generated]
AI principles
Safety, Accountability, Respect of human rights, Democracy & human autonomy, Robustness & digital security, Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death)

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

The battle for the skies escalates: Rafael unveils the next generation of missiles

2025-11-19
Maariv
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and unveiling of AI-enabled loitering missiles with advanced targeting and loitering capabilities. While no actual harm or incident is reported, the AI system's use in autonomous or semi-autonomous weaponry inherently carries a credible risk of causing injury, death, or destruction in military contexts. Therefore, this event fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident involving harm to persons or property.
For the first time in Israel: Rafael launches loitering missiles

2025-11-19
Makor Rishon
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous loitering missiles with AI-based target recognition and navigation). Although no actual harm has yet occurred, the nature of these weapons and their autonomous capabilities plausibly lead to significant harm, including injury or death and destruction of property. The article focuses on the introduction of these AI-enabled weapons, which inherently carry risks of misuse or malfunction leading to harm. Therefore, this qualifies as an AI Hazard under the framework, as it plausibly could lead to an AI Incident in the future.
New: a loitering missile that reaches 40 km within 5 minutes - Ams

2025-11-19
Ams
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous loitering munitions with AI-based target recognition and decision-making). While no actual harm is reported yet, the deployment of such weapons inherently carries a credible risk of causing injury, death, or violations of rights. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to AI Incidents involving significant harm. It is not Complementary Information since it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with potential for harm.
Without GPS and at record speed: the next generation of Arrow missiles with loitering munitions

2025-11-19
Kikar HaShabbat
Why's our monitor labelling this an incident or hazard?
The event involves AI systems embedded in advanced loitering missiles with autonomous target-recognition and navigation capabilities. These AI systems are designed for lethal military use, which inherently carries a direct risk of injury or death (harm to persons) and harm to communities. The article explicitly mentions AI-based automatic target recognition and navigation without GPS, indicating AI system involvement. The development and unveiling of such AI-enabled weapons constitute an AI Hazard due to the plausible future harm; because these weapons are already developed and presumably operational, and given the context of ongoing military conflicts, the risk of direct harm is immediate. Hence, this qualifies as an AI Incident due to the direct link between AI system use and potential lethal harm.