Slack AI Vulnerability Exposes Private Channel Data

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Salesforce's Slack AI had a vulnerability allowing attackers to access private channel data via prompt injection, a flaw in its large language model. Security firm PromptArmor discovered this issue, which could lead to unauthorized data access and phishing attacks. Slack has since patched the flaw to protect user privacy.[AI generated]

Why's our monitor labelling this an incident or hazard?

Slack AI’s flaw enabled a novel method for hackers to exploit the chatbot and deliver malware, creating a clear risk of data breaches and harm. Because the vulnerability was discovered and patched before documented damage occurred, this is a plausible future harm rather than a realized incident.[AI generated]
AI principles
Privacy & data governanceRobustness & digital securityRespect of human rightsSafetyAccountability

Industries
IT infrastructure and hostingDigital securityBusiness processes and support services

Affected stakeholders
WorkersBusiness

Harm types
Human or fundamental rightsEconomic/PropertyReputational

Severity
AI hazard

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbotsContent generation


Articles about this incident or hazard

Thumbnail Image

Workplace affairs could be exposed as Slack flaw gives hackers access

2024-08-22
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Slack AI’s flaw enabled a novel method for hackers to exploit the chatbot and deliver malware, creating a clear risk of data breaches and harm. Because the vulnerability was discovered and patched before documented damage occurred, this is a plausible future harm rather than a realized incident.
Thumbnail Image

Slack could be snooping in on your private conversations | Digital Trends

2024-08-22
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article describes a security firm’s proof‐of‐concept showing how malicious prompts could exploit Slack’s AI to exfiltrate private data and facilitate phishing, posing a credible risk of privacy breach and harm. Since the described harms are potential rather than actualized incidents, this qualifies as an AI Hazard.
Thumbnail Image

Slack AI could be tricked into leaking login details and more

2024-08-22
TechRadar
Why's our monitor labelling this an incident or hazard?
Slack’s AI assistant—a deployed AI system—was directly induced to leak confidential information, demonstrating a realized security breach facilitated by the AI’s behavior. This constitutes an AI Incident because the AI’s improper outputs have already led to unauthorized data exfiltration.
Thumbnail Image

Slack security crack: Its AI feature can breach your private conversations, according to report

2024-08-21
Mashable
Why's our monitor labelling this an incident or hazard?
The article describes a security vulnerability in Slack’s AI system that could plausibly lead to unauthorized data exposure and phishing attacks. No actual breach or harm occurred, but the AI’s design flaw creates a credible risk, fitting the definition of an AI Hazard.
Thumbnail Image

Slack AI Vulnerability Could Expose Data From Private Channels: Report - Decrypt

2024-08-22
Decrypt
Why's our monitor labelling this an incident or hazard?
The article describes a newly revealed security weakness in an AI system (Slack AI) that researchers have exploited in a proof-of-concept demonstration to steal sensitive data. While no large-scale breach has yet occurred, the flaw creates a credible pathway for malicious actors to compromise confidential information. This constitutes a plausible future harm stemming from an AI system’s design and use rather than a fully realized incident, fitting the definition of an AI Hazard.
Thumbnail Image

Slack AI can leak private data via prompt injection

2024-08-21
theregister.com
Why's our monitor labelling this an incident or hazard?
The article describes a security flaw in Slack AI’s design—prompt injection—that has not yet been reported as exploited in production but clearly could lead to unauthorized data exfiltration and privacy breaches. This is a potential risk (plausible future harm) rather than a confirmed incident.
Thumbnail Image

Slack Patches AI Bug That Exposed Private Channels

2024-08-22
Dark Reading
Why's our monitor labelling this an incident or hazard?
The issue stems from Slack AI (an AI system) misprocessing maliciously crafted inputs, creating a scenario where attackers could misuse the model’s outputs for data exfiltration or phishing. Although significant harm was possible, no confirmed breach or user harm occurred. This aligns with an AI Hazard—an AI‐related vulnerability that could plausibly lead to an incident.
Thumbnail Image

Slack AI vulnerability can expose sensitive details of private groups

2024-08-22
Android Headlines
Why's our monitor labelling this an incident or hazard?
The Slack AI system is explicitly involved as it processes conversation data and responds to prompts. The vulnerability (prompt injection) is a malfunction or misuse of the AI system that directly leads to unauthorized disclosure of sensitive information, which is a violation of privacy and a breach of data security, falling under harm to property and potentially human rights. The fact that the vulnerability has been found and partially fixed indicates realized harm or at least a direct risk of harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's malfunction and the harm caused or potentially caused.
Thumbnail Image

Slack patches Slack AI issue that could have allowed insider phishing

2024-08-22
SC Media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Slack AI) whose malfunction (indirect prompt injection vulnerability) was exploited by an insider attacker to deliver phishing links and exfiltrate sensitive data from private channels. This directly led to harm in terms of potential data breaches and phishing attacks, which are violations of user privacy and security rights. The incident has been realized as proof-of-concept exploits were demonstrated, and a patch was deployed to address the issue. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction and use.
Thumbnail Image

Hackers could dupe Slack's AI features to expose private channel messages

2024-08-22
ITPro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Slack AI, an LLM-powered tool) whose malfunction via prompt injection attacks could directly lead to unauthorized disclosure of sensitive private data, a clear harm to privacy and potentially human rights. The report details how attackers could exploit the AI system's behavior to exfiltrate confidential information and conduct phishing attacks. Although Slack has patched the vulnerability and no confirmed data breaches are reported, the incident describes an actual exploitable flaw that has led to a credible risk of harm. Therefore, it meets the criteria for an AI Incident rather than a mere hazard or complementary information.
Thumbnail Image

Slack AI Could Be Tricked Into Leaking Credentials, More - Ny Breaking News

2024-08-22
NY Breaking News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Slack's AI assistant using an LLM) whose misuse directly causes harm by leaking sensitive information to unauthorized parties. This constitutes a violation of rights and harm to property (confidential data). The harm is realized, not just potential, as criminals have demonstrated the ability to steal API keys and files. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's misuse and the harm caused.