Project Nimbus: AI Cloud Deal with Israel Sparks Human Rights Outcry

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google and Amazon’s $1.2 billion Project Nimbus cloud computing contract supplies AI tools to Israel’s government and military. Internal Google documents reveal concerns that the technology could facilitate human rights violations in the West Bank, prompting employee protests and criticism from human rights groups over the reduced oversight allowed under the contract’s bespoke terms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Google Cloud's AI and machine learning tools) being developed and used under a government contract. The system's use is linked to plausible human rights violations, which constitute a violation of fundamental rights under the framework. Internal documents and employee protests indicate that the system's deployment has already raised serious concerns about harm, and the contract's continued use implies that these harms are either occurring or highly plausible. The event therefore qualifies as an AI Incident because of the direct or indirect link between the AI system's use and human rights violations; it is not merely a future risk (hazard) or complementary information, as it details actual concerns and controversies around the system's role in harm.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy, Robustness & digital security, Safety, Human wellbeing

Industries
Government, security, and defence; IT infrastructure and hosting; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights, Public interest, Psychological, Reputational

Severity
AI incident

Business function
ICT management and information security, Monitoring and quality control, Compliance and justice

AI system task
Recognition/object detection, Event/anomaly detection, Forecasting/prediction, Reasoning with knowledge structures/planning, Goal-driven organisation


Articles about this incident or hazard

Google Worried Israeli Contract Could Enable Human Rights Violations

2024-12-03
The New York Times
Israel contract: How Google may have been warned about one of its 'controversial' projects

2024-12-03
The Times of India
Why's our monitor labelling this an incident or hazard?
The article describes internal warnings within Google that its AI cloud services could be used in ways that facilitate human rights violations, which fits the definition of an AI Hazard: an event where AI system use could plausibly lead to harm. The article offers no evidence of actual harm occurring or of harm directly caused by the AI system, so it does not meet the criteria for an AI Incident. The employee protests and company responses further indicate concern about potential misuse rather than documented harm. The event is therefore best classified as an AI Hazard.
Google said worried contract with Israel could damage its reputation

2024-12-03
The Times of Israel
Why's our monitor labelling this an incident or hazard?
The article describes AI-capable Google Cloud services being used by the Israeli government and military, with internal Google concerns and employee protests about potential human rights violations facilitated by these systems. While the article confirms no direct harm caused by the AI systems, the concerns and protests indicate a plausible risk of harm to human rights and communities. The involvement of AI in potentially facilitating human rights violations, and the reputational damage to Google, are central to the report. Given the credible risk and internal warnings about the facilitation of human rights violations, this qualifies as an AI Hazard rather than an AI Incident, since the article documents no direct or confirmed harm caused by the AI system.
Documents Contradict Google's Claims About Its Project Nimbus Contract With Israel

2024-12-03
The Intercept
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (cloud computing and AI tools) provided to the Israeli government, including its military. The contract's "Adjusted Terms of Service" potentially allow uses that could violate human rights, and the Israeli government's actions, which are under investigation as crimes against humanity, heighten the risk of harm. Although no specific AI Incident (realized harm) is described, the nature of the contract and Google's lack of control create a credible risk of AI-enabled harm, including human rights violations. This fits the definition of an AI Hazard, as the event could plausibly lead to an AI Incident. It is not Complementary Information, because it reveals new, critical information about the contract's terms and risks rather than an update or response to a prior incident; and it is not Unrelated, because AI systems are central to the event and its risks.
Project Nimbus: Google and Amazon's Role in Israeli AI Surveillance Under Fire

2024-12-03
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and cloud services provided to the Israeli government and military, which are alleged to be used in operations causing harm to Palestinians, including killings and segregation, in violation of human rights and international law. The development and use of the AI systems in this context have directly or indirectly led to significant harm, fulfilling the criteria for an AI Incident. The involvement of AI in surveillance and military targeting, combined with credible accusations from human rights organizations and employee protests, supports this classification.
Google Worried Israeli Contract Could Enable Human Rights Violations

2024-12-03
DNyuz
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (cloud computing with AI tools for image and video analysis) provided to government and military customers. The use of these AI systems is linked to potential, indirect human rights violations, a recognized harm under the framework. Internal documents and employee protests indicate that the deployment has already led to reputational harm and ethical concerns, with plausible indirect links to human rights abuses. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to violations of human rights or breaches of obligations intended to protect fundamental rights. The event is not merely a potential hazard or complementary information; it concerns actual use and the harms associated with it.