Saab Develops AI-Enabled 'Loyal Wingman' Unmanned Combat Aircraft

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Swedish defense company Saab is developing an AI-driven unmanned combat aircraft, dubbed the 'Loyal Wingman,' designed to operate alongside manned fighter jets. With test flights planned for next year and expected market readiness by 2030, the system raises concerns over potential future AI hazards in military applications.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI-assisted unmanned combat aircraft capable of autonomous operations in a military context. Although no harm has yet occurred, the development and future deployment of such AI-enabled autonomous weapons systems plausibly pose significant risks of harm, including injury or death, violations of human rights, and escalation of conflict. This event therefore qualifies as an AI Hazard due to the credible potential for future harm stemming from the AI system's intended use in lethal autonomous operations.[AI generated]
AI principles
Safety, Accountability, Respect of human rights, Robustness & digital security, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Psychological, Human or fundamental rights, Public interest

Severity
AI hazard

Business function
Research and development, Manufacturing, Monitoring and quality control

AI system task
Recognition/object detection, Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Saab has developed an unmanned combat aircraft

2025-06-01
Teknikveckan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-assisted unmanned combat aircraft capable of autonomous operations in a military context. Although no harm has yet occurred, the development and future deployment of such AI-enabled autonomous weapons systems plausibly pose significant risks of harm, including injury or death, violations of human rights, and escalation of conflict. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm stemming from the AI system's intended use in lethal autonomous operations.
Saab to test-fly unmanned combat aircraft

2025-05-30
gp.se
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI-enabled unmanned combat aircraft, which are AI systems by definition due to their autonomous capabilities and AI assistance. The article does not report any realized harm but highlights the potential for these systems to be used in dangerous military operations, which could plausibly lead to harm such as injury, violation of rights, or harm to communities. Therefore, this qualifies as an AI Hazard rather than an Incident or Complementary Information.
Saab to test-fly unmanned combat aircraft

2025-05-30
Sydsvenskan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in unmanned combat aircraft designed for military operations. While no harm has yet occurred, the development and testing of AI-enabled autonomous combat drones pose a credible risk of future harm, including injury, disruption, or violations of rights due to their military use. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harm in the future.
Saab close to completing unmanned combat aircraft - testing next year

2025-05-30
Omni
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI-enabled unmanned combat aircraft with autonomous capabilities. While no harm has yet occurred, the deployment of such AI-powered military systems could plausibly lead to significant harms, including injury or harm to people, disruption of critical infrastructure, or violations of human rights, given the nature of autonomous weapons. Therefore, this event qualifies as an AI Hazard due to the credible risk posed by the development and future use of AI in lethal autonomous systems.
Saab to test-fly unmanned combat aircraft

2025-05-30
HD
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI system integrated into unmanned combat aircraft intended for military use. Although no harm has yet occurred, the deployment of AI-powered autonomous weapons capable of striking targets in hostile areas presents a credible risk of significant harm, including injury, loss of life, and escalation of conflict. Therefore, this development constitutes an AI Hazard due to the plausible future harm from the use of AI in lethal autonomous weapon systems.
Saab develops unmanned combat aircraft

2025-06-02
evertiq.se
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-controlled unmanned combat aircraft, which qualifies as an AI system under the framework. The system is under development and not yet deployed, so no direct harm has occurred. However, autonomous weapons systems have a well-recognized potential to cause significant harm, including injury or death, violations of human rights, and broader societal harm. The article highlights the intended use for risky missions in hostile territory, which plausibly could lead to such harms. Thus, this is an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the development and potential capabilities rather than responses or updates to past incidents. It is not Unrelated because the AI system and its potential impacts are central to the report.
Saab develops unmanned combat aircraft - first flights next year

2025-06-02
placera.se
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI system integrated into an unmanned combat aircraft designed for military use, which could plausibly lead to significant harm including injury or death, disruption, or violations of human rights due to its combat role. Although no harm has yet occurred, the nature and intended use of the AI system in autonomous weaponry constitute a credible risk of future harm. Therefore, this event qualifies as an AI Hazard under the framework, as it describes the development and planned deployment of an AI-enabled military system with high potential for misuse or harm.