Anduril Unveils Pulsar: AI-Powered Electronic Warfare System with Autonomous Threat Response


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Anduril Industries has introduced Pulsar, an AI-enabled electronic warfare system capable of autonomously detecting, jamming, and countering electromagnetic threats, including drones. Already deployed with US forces, Pulsar's adaptive AI allows rapid response and data sharing, raising concerns about potential future harm in military and conflict settings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (Pulsar) used for electronic warfare, involving AI-driven real-time threat detection and response. However, there is no indication that the system has caused any injury, disruption, rights violations, or other harms yet. The focus is on the system's capabilities and potential to change warfare dynamics. Since the system's deployment could plausibly lead to harms such as escalation of conflict or unintended consequences in warfare, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system with potential for harm.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Privacy & data governance

Industries
Government, security, and defence; Digital security; Mobility and autonomous vehicles; Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Public interest; Human or fundamental rights; Economic/Property; Psychological; Reputational

Severity
AI hazard

Business function
ICT management and information security; Monitoring and quality control

AI system task
Recognition/object detection; Event/anomaly detection; Goal-driven organisation


Articles about this incident or hazard


Anduril unveils game-changing electromagnetic warfare system

2024-05-06
Defence Blog
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Pulsar) used for electronic warfare, involving AI-driven real-time threat detection and response. However, there is no indication that the system has caused any injury, disruption, rights violations, or other harms yet. The focus is on the system's capabilities and potential to change warfare dynamics. Since the system's deployment could plausibly lead to harms such as escalation of conflict or unintended consequences in warfare, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system with potential for harm.

Anduril Announces Pulsar Family of AI-Enabled Electromagnetic Warfare Systems

2024-05-06
sUAS News - The Business of Drones
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-enabled electromagnetic warfare systems with autonomous and adaptive capabilities. However, it is an announcement of a product and its capabilities without any indication of actual harm, malfunction, or misuse leading to injury, rights violations, or other harms. The potential for future harm exists given the military nature of the system, but the article does not describe any event or circumstance where such harm has occurred or was narrowly avoided. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context on AI developments and their implications in the defense sector.

Anduril Unveils Pulsar AI-Enabled EW System

2024-05-08
The Defense Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Pulsar) designed for electronic warfare, capable of autonomous threat identification and response. Although no harm or incident has occurred yet, the system's intended military use and capabilities imply a credible risk of harm in future conflict situations. The event concerns the development and introduction of an AI-enabled military system with potential for significant impact, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it involves an AI system with plausible future harm potential.

Anduril Industries launches 'Pulsar' EW systems at SOF Week

2024-05-07
Shephard Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and machine learning in the Pulsar EW system, confirming the presence of an AI system. The system is designed for electronic warfare, including counter-drone operations, which inherently involve risks of harm to persons, property, or critical infrastructure in conflict zones. Although no specific incident or harm is reported, the operational deployment of such AI-enabled systems in military contexts plausibly could lead to AI incidents. Since no actual harm or incident is described, but the potential for harm is credible and inherent, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article is not providing updates or responses to a prior incident, nor is it unrelated as it clearly involves AI systems with potential for harm.

Anduril debuts Pulsar AI-powered electronic warfare system

2024-05-06
Breaking Defense
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Pulsar) used in electronic warfare, which can jam devices and respond to threats autonomously. The system is already in use by US forces, indicating active deployment. While the system's capabilities could plausibly lead to harms such as disruption of critical infrastructure or harm in conflict zones, no actual harm or incident is reported. The focus is on the system's introduction and potential impact rather than realized harm or responses to past incidents. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future, but no direct or indirect harm has yet occurred or been reported.

Pulsar, our drone hunter - ITBUSINESS

2024-05-09
newsbeezer.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Pulsar) used in electronic warfare with autonomous and adaptive capabilities. The system is deployed in combat zones and designed to counter threats, implying potential for harm through its use. However, no actual harm or incident is described. The focus is on the system's capabilities and deployment context, indicating a credible risk of future harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its military application are central to the article.