Salesforce Launches Missionforce AI Unit for U.S. National Security

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Salesforce has launched Missionforce, a new business unit focused on integrating AI, data, and cloud technologies into U.S. national security operations. Led by Kendall Collins, Missionforce aims to modernize defense workflows in personnel, logistics, and decision-making, a move that raises potential future risks associated with deploying AI in critical infrastructure.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the creation of an AI-focused business unit aimed at national security applications, which involves AI systems in critical areas like personnel, logistics, and decision-making. Although no harm or incident is reported, the deployment of AI in defense workflows inherently carries plausible risks of harm (e.g., misuse, malfunction, or unintended consequences affecting national security or human rights). Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident in the future. It is not an AI Incident because no harm has yet occurred, and it is neither Complementary Information nor Unrelated, since it directly concerns the development and use of AI systems with potential risks.[AI generated]
AI principles
Accountability; Robustness & digital security; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Public interest

Severity
AI hazard

Business function
Human resource management; Logistics; Planning and budgeting

AI system task
Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Salesforce launches 'Missionforce,' a national security-focused business unit | TechCrunch

2025-09-16
TechCrunch
Salesforce launches new business unit focused on US national security

2025-09-16
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event describes the creation of an AI-focused business unit aimed at applying AI to defense workflows, which are critical infrastructure and national security areas. However, the article does not report any realized harm or incidents resulting from this AI deployment. Instead, it describes a strategic move to develop and use AI in national security. Given the potential for AI in defense to lead to significant harms if misused or malfunctioning, this event plausibly could lead to AI-related harms in the future. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Salesforce launches 'Missionforce,' a national security-focused business unit

2025-09-16
RocketNews | Top News Stories From Around the Globe
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems in national security, which could plausibly lead to significant impacts or risks in the future. However, there is no indication that any harm, malfunction, or violation has occurred yet. This therefore constitutes an AI Hazard: the deployment of AI in defense workflows could plausibly lead to incidents involving harm or rights violations, but no such incidents are reported at this time.
Salesforce (NYSE:CRM) Launches Missionforce to Power U.S. National Security with AI - TipRanks.com

2025-09-16
TipRanks Financial
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems, including agentic AI capable of autonomous action, being developed and deployed in national security and defense operations. While no harm or incident is reported, the nature of these AI applications in critical infrastructure and defense implies a credible risk of future harm, such as operational failures, misuse, or unintended consequences. Since the event concerns the launch and intended use of AI systems with significant potential impact but no current harm, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.
Salesforce Launches Missionforce to Boost National Security

2025-09-17
ExecutiveBiz
Why's our monitor labelling this an incident or hazard?
The article describes the launch of an AI-enabled product suite for national security applications, which involves AI system use. However, there is no indication of any harm, malfunction, or misuse resulting from these AI systems. The event is about the introduction of AI technology with potential future impacts but does not describe any incident or hazard. Therefore, it is best classified as Complementary Information, as it provides context on AI adoption in a sensitive sector without reporting any realized or plausible harm.
Benioff wants to cash in on defense tech boom with AI bots

2025-09-18
The San Francisco Standard
Why's our monitor labelling this an incident or hazard?
While the article clearly involves AI systems being developed and deployed in defense settings, it does not describe any actual harm or incidents caused by these systems. The focus is on the launch and strategic positioning of AI products for defense use. Given the nature of AI in military applications, this could plausibly lead to future harms, but no specific harm or incident is reported. Therefore, this event fits the definition of an AI Hazard: it could plausibly lead to harm in the future, but no harm has yet occurred or been reported.