
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Google signed a confidential agreement allowing the U.S. Department of Defense to use its AI technology for classified projects. Over 560 Google employees, including senior staff, protested, urging CEO Sundar Pichai to reject military use of AI, citing risks of autonomous weapons and mass surveillance.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Google's AI models by the U.S. Department of Defense in classified projects, indicating AI system involvement. Although no direct harm is reported, the military use of AI systems, especially in classified contexts, could plausibly lead to significant harms, including potential violations of human rights or other serious consequences. The employee opposition highlights the ethical concerns and controversial nature of this cooperation. Since the harm has not been realized but could plausibly occur, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]