
The U.S. Department of Defense has signed agreements with seven major tech companies (Google, Microsoft, Amazon, Nvidia, OpenAI, SpaceX, and Reflection) to use their AI technologies in secret military operations such as mission planning and weapons targeting. The exclusion of Anthropic, due to ethical disputes, highlights ongoing concerns about AI's role in warfare and potential risks to civilians. [AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the deployment of AI systems in military operations, including weapon targeting and mission planning, which clearly constitutes AI system use. While no direct or indirect harm has been reported yet, deploying AI in autonomous or semi-autonomous weapons and surveillance systems carries a plausible risk of harming persons or communities, or of violating human rights, in the future. The article also notes controversy over the use of AI tools for surveillance and autonomous killing, underscoring the potential for harm. Because no actual harm or incident is described, but the potential for harm is credible and significant, the event is classified as an AI Hazard. [AI generated]