Johns Hopkins APL Unveils VIPR: AI Virtual Co-pilot for Combat Aircraft

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Johns Hopkins Applied Physics Laboratory has developed VIPR (Virtual Intelligent Peer-Reasoning), an AI co-pilot for combat aircraft. Unlike autonomous drones such as the XQ-58A Valkyrie, VIPR is designed to enhance human pilots' situational awareness and decision-making while reducing their cognitive load, acting as an attentive wingman, a high-performance backup, and an intelligent support system in simulations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (VIPR) developed to assist combat pilots, fulfilling the definition of an AI system. There is no indication that the system has caused any injury, violation of rights, or other harms yet, so it is not an AI Incident. The system is still in the prototype and testing phase, with promising but preliminary results, so it is not merely complementary information about governance or responses. Given the high-stakes application in combat aviation, the AI system's future malfunction or misuse could plausibly lead to serious harm, meeting the criteria for an AI Hazard.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Privacy & data governance; Human wellbeing

Industries
Government, security, and defence; Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Digital security

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological; Reputational

Severity
AI hazard

Business function:
Research and development

AI system task:
Interaction support/chatbots; Reasoning with knowledge structures/planning; Recognition/object detection; Goal-driven organisation; Event/anomaly detection


Articles about this incident or hazard

An R2-D2 to fly alongside combat pilots takes shape in the US

2024-06-20
infobae
Every Star Wars fan's dream: a virtual co-pilot for aerial combat in the style of R2-D2

2024-06-21
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (VIPR) designed for combat aviation to assist human pilots. While the system is still in development and testing phases, no direct or indirect harm has occurred yet. However, given the military context and the AI's role in decision-making during combat, there is a credible risk that its deployment could lead to injury or harm to persons or other significant harms in the future. Thus, the event qualifies as an AI Hazard rather than an AI Incident or Complementary Information, as it highlights a plausible future harm scenario rather than a realized harm or a response to past incidents.
An R2-D2 to fly with combat pilots takes shape in the US

2024-06-20
europa press
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a virtual intelligent copilot for combat aircraft. However, the article does not report any actual harm, injury, or violation caused by the AI system. Instead, it focuses on the development, capabilities, and promising initial simulation results of the system. There is no indication that the AI system has malfunctioned or caused any incident. While the system could plausibly lead to harm if deployed without sufficient testing or safeguards, the article does not present this as an immediate or credible risk but rather as ongoing research and development. Therefore, the event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI development in a critical domain.
A 'Star Wars' R2-D2 for combat pilots takes shape in the United States

2024-06-20
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of an AI system with significant potential impact on human safety and military operations. Although no harm has occurred yet, the AI system's role in piloting combat aircraft and managing autonomous drones presents a credible risk of future harm, such as injury or disruption in critical infrastructure. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential risks are clearly described.
An R2-D2 to fly with combat pilots takes shape in the US

2024-06-20
Notimérica
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (VIPR) developed to assist combat pilots, which fits the definition of an AI system. The system is still in development and testing phases, with no reported incidents of harm or malfunction. Since no harm has occurred yet, it cannot be classified as an AI Incident. However, the AI system's role in critical military aviation tasks implies a credible risk of future harm if the system malfunctions or is misused. This aligns with the definition of an AI Hazard, as the AI system's use could plausibly lead to injury or disruption of critical infrastructure (military operations). The article does not focus on responses, governance, or updates to past incidents, so it is not Complementary Information. It is not unrelated because it clearly involves an AI system with potential safety implications.
An R2-D2 to fly alongside combat pilots takes shape in the US

2024-06-20
Notimérica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a virtual intelligent copilot for combat aircraft. However, the article only reports on development and simulation testing phases without any indication of actual harm or incidents caused by the AI system. There is no mention of injury, operational failure, rights violations, or other harms. The system's role is supportive and experimental at this stage, with no realized or imminent harm described. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on AI development in a critical domain, helping stakeholders understand potential future capabilities and implications without reporting any harm or credible risk of harm yet.
An R2-D2 to fly with combat pilots takes shape in the US - El Diario - Bolivia

2024-06-23
www.eldiario.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as assisting human pilots in combat aircraft. However, there is no indication that the AI system has caused or contributed to any harm or incident. The article focuses on the development and intended use of the AI system, highlighting its capabilities and potential benefits. Since no harm has occurred and the AI system's use could plausibly lead to future harm if misused or malfunctioning, but this is not stated or implied as imminent, the event is best classified as Complementary Information, providing context and insight into AI developments in military aviation.
Scientists are working on an "Arturito" (R2-D2) to fly alongside combat pilots

2024-06-21
BioBioChile
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (VIPR) designed for use in combat aircraft, which qualifies as an AI system under the definitions. However, the event is about the development and testing phase, with no reported harm or malfunction. Since no harm has occurred yet, but the system's use in combat aviation could plausibly lead to harm in the future (e.g., if the AI malfunctions or is misused), this fits the definition of an AI Hazard. There is no indication of an incident or complementary information about responses to harm. Therefore, the classification is AI Hazard.