L3Harris unveils AMORPHOUS autonomous swarm-control platform


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

L3Harris introduced AMORPHOUS, an open-architecture AI platform that enables U.S. and allied forces to command thousands of heterogeneous unmanned assets through decentralized decision-making across multiple domains. Designed for complex military missions, the swarm-control software has undergone prototype testing with the U.S. Army and the Defense Innovation Unit; although no incidents have been reported, its intended use raises potential hazard concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves an AI system: AMORPHOUS is an autonomy software platform that coordinates large numbers of autonomous systems, and the event concerns its development and use for military swarm control. Although the software is not reported to have caused any harm yet, its deployment in military contexts with autonomous capabilities could plausibly lead to harms such as injury, disruption, or violations of rights. The event therefore fits the definition of an AI Hazard: it describes a credible potential for future harm stemming from the AI system's use, but no realized harm or incident.[AI generated]
AI principles
Accountability, Safety, Respect of human rights, Robustness & digital security, Transparency & explainability, Democracy & human autonomy, Privacy & data governance, Sustainability

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security; IT infrastructure and hosting; Mobility and autonomous vehicles

Harm types
Physical (death), Physical (injury), Human or fundamental rights, Public interest, Environmental, Economic/Property, Psychological

Severity
AI hazard

Business function
Research and development, ICT management and information security, Monitoring and quality control

AI system task
Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard


L3Harris unveils Amorphous autonomy software to manage drone swarms

2025-02-10
Yahoo

Defense company unveils plan to control thousands of drones

2025-02-10
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AMORPHOUS, a platform enabling autonomous control of thousands of drones, which clearly meets the definition of an AI system. The system is under development and not yet deployed, so no direct harm has occurred. However, the intended military use of autonomous drone swarms, combined with recent real-world drone attacks causing casualties, indicates a credible risk of future harm. This fits the definition of an AI Hazard: the system's use could plausibly lead to injury, death, or other significant harms. There is no indication of a realized incident, nor complementary information about mitigation or governance responses, so AI Hazard is the appropriate classification.

L3Harris Launches New Technology to Control Autonomous Swarms

2025-02-10
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (autonomous unmanned swarms with decentralized decision-making). While such systems have a high potential for misuse or harm, the article does not describe any actual harm or incident caused by the system. The content is primarily about the launch and capabilities of the technology, with forward-looking statements about its future use. Therefore, it fits the definition of an AI Hazard, as the system could plausibly lead to harm in the future given its military autonomous capabilities, but no harm has yet occurred or been reported.

L3Harris Unveils Software to Manage Swarm of Autonomous Platforms

2025-02-12
The Defense Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as enabling autonomous, decentralized decision-making for swarms of unmanned platforms with military applications, including precision strikes and electronic warfare. Although no harm or malfunction is reported, the system's capabilities and intended use imply a credible risk of future harm, such as injury, disruption of critical infrastructure, or violations of human rights. The article focuses on the system's development and demonstration rather than any incident of harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the report.

L3Harris Unveils Amorphous Software for Drone Swarms

2025-02-11
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Amorphous) that autonomously controls large numbers of unmanned systems (drones) with leaderless swarm coordination, which fits the definition of an AI system. The system is currently in development and testing phases with the US Department of Defense, aiming to field thousands of autonomous platforms. Although no harm has yet occurred, the nature of the system and its intended military use plausibly could lead to harms such as injury, disruption, or violations of rights if deployed. Since the article does not report any realized harm or incident, but highlights the potential for significant future harm, the event is best classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the new system's capabilities and potential impact rather than responses or updates to past incidents.

L3Harris Launches New Technology to Control Autonomous Swarms

2025-02-10
L3Harris® Fast. Forward.
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (AMORPHOUS) that controls autonomous unmanned swarms with decentralized decision-making capabilities, which fits the definition of an AI system. There is no mention of any realized harm or incident caused by the system yet, so it is not an AI Incident. However, given the military application and the autonomous nature of the system, there is a credible risk that its use or malfunction could lead to injury, disruption, or other harms in the future. Thus, it qualifies as an AI Hazard. The article is not merely general AI news or a product launch without risk, because the system's capabilities and intended use imply plausible future harm. It is not Complementary Information since it does not update or respond to a prior incident or hazard.

L3Harris Launches Software to Control Multiple Autonomous Assets - ExecutiveBiz

2025-02-11
ExecutiveBiz
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (AMORPHOUS) that controls multiple autonomous unmanned assets with decentralized decision-making capabilities, fitting the definition of an AI system. There is no mention of any realized harm or incident caused by the system's development or use, so it is not an AI Incident. However, given the military application and autonomous capabilities, there is a plausible risk that the system could lead to harms such as injury, disruption, or rights violations in the future. Hence, the event is best classified as an AI Hazard.

L3Harris launches new tech to control autonomous swarms

2025-02-13
news.satnews.com
Why's our monitor labelling this an incident or hazard?
The software AMORPHOUS is an AI system enabling autonomous control of military unmanned swarms, which fits the definition of an AI system. The event concerns its development and testing, with no indication of harm having occurred yet. Given the military application and autonomous capabilities, there is a plausible risk that its use could lead to harms such as injury or violations of rights in the future. Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.