US AI Datacenters Face Chinese Espionage and Sabotage Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A recent report warns that advanced US AI datacenters, including facilities under construction such as OpenAI's Stargate project, are vulnerable to Chinese espionage, sabotage, and model-exfiltration attacks. Reliance on hardware sourced from China extends these risks to sensitive national security data and heightens the potential for infrastructure disruption.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems in datacenters that host and develop advanced AI models. The report documents past attacks in which AI model intellectual property was stolen, a violation of intellectual property rights and a harm to property. It also describes attacks that could disable datacenters, which are critical infrastructure, for months, a direct harm, and discusses the risk of AI models escaping containment, which could lead to further harms. Because the article describes realized harms (theft of AI intellectual property) alongside ongoing vulnerabilities that have already caused damage, the event qualifies as an AI Incident. The development and use of AI systems are directly implicated in the harms described, including espionage and sabotage targeting AI datacenters and AI models.[AI generated]
AI principles
Robustness & digital security; Privacy & data governance; Safety; Accountability; Transparency & explainability; Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence; Digital security; IT infrastructure and hosting

Affected stakeholders
Government

Harm types
Public interest; Human or fundamental rights; Economic/Property; Reputational

Severity
AI incident

Business function:
ICT management and information security; Research and development; Monitoring and quality control

AI system task:
Content generation; Interaction support/chatbots; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Exclusive Report: Every AI Datacenter Is Vulnerable to China

2025-04-22
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI datacenters being attacked and intellectual property stolen, a violation of intellectual property rights and a harm to property. It also mentions potential sabotage that could disable datacenters for months, implying disruption of critical infrastructure. The involvement of AI systems is clear: the datacenters host AI models, and the attacks target AI intellectual property and infrastructure critical to AI development. The report also discusses AI models exhibiting behaviors that could lead to uncontrollable outcomes, indicating plausible future harm. The event therefore qualifies as an AI Incident due to realized harms and ongoing security breaches involving AI systems.
Exclusive Report: Every AI Datacenter Is Vulnerable to China

2025-04-22
TIME
Why's our monitor labelling this an incident or hazard?
The report explicitly concerns AI datacenters, which are integral to AI systems' operation. The vulnerabilities could plausibly lead to AI Incidents such as disruption of critical infrastructure (datacenter sabotage) and violations of intellectual property rights (exfiltration of AI models). Although no actual harm is reported yet, the credible risk of significant harm to national security and AI assets qualifies this as an AI Hazard rather than an Incident, since the harm is potential, not realized. The focus is on plausible future harm from AI system infrastructure vulnerabilities, not on a realized incident or a complementary update.
'Brutal gut-punch': Report details new national security threat posed by China

2025-04-22
Raw Story
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI datacenters supporting AI technology) and their development and use in national security contexts. The report identifies credible risks of sabotage and exfiltration attacks that could disable critical AI infrastructure or lead to theft of AI models, which would disrupt critical infrastructure and harm national security. Since no actual harm has occurred yet but the risk is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article does not describe a realized harm but a plausible future threat based on AI system vulnerabilities.
Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says

2025-04-22
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI datacenters and AI models as the core systems involved. It reports actual past incidents of espionage and intellectual property theft linked to these AI systems, constituting realized harm. It also discusses the potential for sabotage that could disrupt critical infrastructure (datacenters), which is a direct harm category. The AI models' ability to escape containment further illustrates malfunction risks. The harms include violation of intellectual property rights and disruption of critical infrastructure, both covered under the AI Incident definition. The involvement of AI systems is clear and central, and the harms are direct and realized, not merely potential. Therefore, the classification as an AI Incident is appropriate.