
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Salesforce has launched Missionforce, a new business unit focused on integrating AI, data, and cloud technologies into U.S. national security operations. Led by Kendall Collins, Missionforce aims to modernize defense workflows in personnel, logistics, and decision-making, raising potential risks associated with the future deployment of AI in critical infrastructure.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes the creation of an AI-focused business unit aimed at national security applications, which would place AI systems in critical areas such as personnel, logistics, and decision-making. Although no harm or incident is reported, deploying AI in defense workflows inherently carries plausible risks of harm (e.g., misuse, malfunction, or unintended consequences affecting national security or human rights). This event therefore fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident in the future. It is not an AI Incident, because no harm has yet occurred. Nor is it Complementary Information or Unrelated, since it directly concerns the development and use of AI systems with potential risks.[AI generated]