AI-Powered Employee Surveillance Raises Privacy and Labor Rights Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Major corporations are using AI systems from companies like Aware to monitor employee communications on platforms such as Slack, Teams, and Zoom. These AI tools analyze sentiment, flag behaviors, and sometimes identify individuals, raising significant concerns about privacy violations, chilling effects on workplace speech, and potential labor rights infringements.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems deployed to monitor employee communications, analyzing sentiment and detecting harmful behaviors. This use of AI directly affects employees' privacy and freedom, implicating human rights and labor rights violations. The monitoring is active and ongoing, with real companies using these tools, indicating realized harm rather than just potential risk. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and potential harm to employees.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Human wellbeing
Accountability
Democracy & human autonomy

Industries
Business processes and support services
Digital security
IT infrastructure and hosting

Affected stakeholders
Workers

Harm types
Human or fundamental rights
Psychological
Reputational
Economic/Property

Severity
AI incident

Business function
Human resource management
ICT management and information security
Monitoring and quality control
Compliance and justice

AI system task
Recognition/object detection
Event/anomaly detection
Organisation/recommenders


Articles about this incident or hazard

Starbucks, Walmart and other companies are using AI to track employees' messages, here's why - Times of India

2024-02-13
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to analyze employee messages and flag potential risks, indicating AI system involvement. However, it does not describe any actual harm or incidents resulting from this use, such as confirmed violations of privacy or labor rights, or any adverse outcomes for employees. The concerns raised are about privacy and freedom of speech, which are potential harms, but no direct or indirect harm has been reported as having occurred. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as privacy violations or suppression of labor rights in the future, but no incident has yet materialized.
From Slack To Surveillance: Here's How Companies Monitor Your Communications With AI By Benzinga

2024-02-10
Investing.com UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems deployed to monitor employee communications, analyzing sentiment and detecting harmful behaviors. This use of AI directly affects employees' privacy and freedom, implicating human rights and labor rights violations. The monitoring is active and ongoing, with real companies using these tools, indicating realized harm rather than just potential risk. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and potential harm to employees.
AI might be reading your Slack messages: 'A lot of this becomes thought crime'

2024-02-09
CNBC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems analyzing employee messages and detecting various behaviors, confirming AI system involvement. The use is in monitoring employee communications, which is a use case of AI. However, there is no direct evidence or report of actual harm occurring, such as employee rights violations or health impacts. The concerns expressed are about potential misuse and dystopian outcomes, which are plausible future harms but not confirmed incidents. Since the article mainly discusses the existence and use of the AI system and the societal concerns around it without reporting a specific incident or imminent hazard, it fits the definition of Complementary Information.
Major companies are reportedly using this AI tool to track Slack and Teams messages from more than 3 million employees. Privacy experts are alarmed.

2024-02-12
Business Insider India
Why's our monitor labelling this an incident or hazard?
The AI system (Aware) is explicitly described as analyzing employee messages using large language models to detect toxic behavior and flag policy violations. Its outputs can lead to real consequences for employees, including disciplinary actions and terminations, which constitute harm to individuals' rights and workplace community harm. Privacy concerns and potential misuse or errors in AI decision-making further support the classification as an AI Incident. The involvement of AI in monitoring and flagging messages directly leads to realized harms, not just potential ones, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
AI monitoring employee comms for 'thought crimes' in Slack & more

2024-02-12
9to5Mac
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to monitor employee communications and flag potentially problematic behaviors, which directly implicates violations of human and labor rights. The use of AI for surveillance and analysis of employee content, including identification of individuals and sensitive behaviors, has led to concerns about privacy breaches and chilling effects on speech, which are harms to rights protected under applicable law. Therefore, this qualifies as an AI Incident due to realized harm related to rights violations stemming from AI use.
AI might be reading your Slack messages: 'A lot of this becomes thought crime'

2024-02-09
NBC Chicago
Why's our monitor labelling this an incident or hazard?
The event involves AI systems analyzing employee communications, which fits the definition of an AI system. The AI's use in employee surveillance and risk detection is described, but no concrete harm (such as wrongful disciplinary actions, privacy breaches, or rights violations) is reported as having occurred. The concerns expressed by experts and the potential for chilling effects or privacy issues indicate plausible future harm. Therefore, the event is best classified as an AI Hazard, since the AI's use could plausibly lead to harms like violations of privacy or labor rights, but no direct or indirect harm has been documented in the article.
AI might be reading your Slack messages: 'A lot of this becomes thought crime'

2024-02-09
NBC 5 Dallas-Fort Worth
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems analyzing employee messages to detect risk behaviors and sentiment, which constitutes AI system involvement. The use of AI for surveillance and monitoring employee communications directly impacts employee privacy and labor rights, fulfilling the criteria for harm to human rights and workplace conditions. The chilling effect on speech and potential disciplinary consequences based on AI-flagged content represent realized harms. Although the AI does not make final decisions, its outputs influence human actions that affect employees. Thus, the event meets the definition of an AI Incident due to direct and indirect harm caused by AI use in employee surveillance.
AI Surveillance in the Workplace Causes Privacy Concerns | Cryptopolitan

2024-02-12
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for surveillance and analysis of employee communications, which fits the definition of an AI system. The concerns raised relate to potential privacy violations and chilling effects on workplace discourse, which are plausible harms. However, the article does not describe any actual harm or incident that has occurred due to these AI systems, only the potential for such harm and regulatory responses. Therefore, this qualifies as an AI Hazard because the development and use of these AI surveillance systems could plausibly lead to violations of privacy and workers' rights, but no specific incident of harm is reported.
Major companies are reportedly using this AI tool to track Slack and Teams messages from more than 3 million employees. Privacy experts are alarmed.

2024-02-12
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as analyzing employee communications to flag policy violations and identify individuals, which can lead to disciplinary actions or other workplace consequences. This use of AI directly impacts employees' privacy and labor rights, fulfilling the criteria for harm under the AI Incident definition (violation of human rights or labor rights). The concerns raised by privacy experts and the potential for faulty decision-making further support the classification as an AI Incident rather than a hazard or complementary information. The involvement of AI in monitoring and flagging employee behavior, with real consequences, confirms this classification.
Orwell's Vision Realized: CNBC Exposes How AI Amplifies Workplace Spying

2024-02-10
CryptoGlobe
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems deployed for workplace surveillance that analyze employee communications, which directly implicates AI system use. The surveillance impacts employee privacy and labor rights, which are protected under human rights frameworks. The harm is realized as employees are monitored extensively, raising ethical and legal concerns. This fits the definition of an AI Incident as the AI system's use has directly led to violations of human rights and labor rights. The article does not merely discuss potential future harm or general AI developments but reports on current, active use causing harm.
No Chat Escapes Scrutiny: Starbucks and Other US Companies Use AI to Monitor Employee Emotions - Liberty Times Finance (自由財經)

2024-02-10
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to monitor employee chats and emotions, analyzing billions of interactions to detect negative behaviors and risks. This AI use directly impacts employee privacy and labor rights, constituting a violation of fundamental rights protected by law. The involvement of AI in surveillance and emotional analysis of employees, without clear consent or safeguards, is a breach of obligations intended to protect labor rights. Hence, this qualifies as an AI Incident due to realized harm related to rights violations.
Using AI to Monitor Employee Emotions! Aware: It Can Help Companies Understand Employees | Technology | Newtalk News (Newtalk新聞)

2024-02-10
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Aware) used to monitor employee communications and emotions, analyzing billions of interactions. The system's role in surveilling employees and detecting sensitive behaviors like harassment and discrimination indicates a direct involvement in potential or actual violations of labor rights and privacy. The use of AI for such pervasive monitoring constitutes a breach of obligations intended to protect fundamental and labor rights. Therefore, this event qualifies as an AI Incident due to the direct harm to human rights and labor rights caused by the AI system's use.
Multinational Companies Are Using AI to Monitor Employee Work Messages and Gauge Employee Sentiment in Real Time

2024-02-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to monitor employee communications and emotions, which qualifies as AI system involvement. The use of AI in this context directly impacts employees' privacy and labor rights, constituting a violation of rights under applicable law. The AI system's use in surveillance and emotional analysis of employees is a direct cause of harm to workers' rights and well-being, fulfilling the criteria for an AI Incident. Although the AI tool does not make decisions, its role in enabling invasive monitoring is pivotal to the harm described. Hence, this event is classified as an AI Incident.
It's Here: Major US Companies Are Using AI to Monitor Employee Chats

2024-02-11
煎蛋
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems analyzing employee communications to identify risks and policy violations, which can lead to disciplinary actions. This use of AI directly affects employees' privacy and labor rights, constituting a violation of fundamental rights. The involvement of AI in monitoring and flagging employee behavior is central to the event, and the harms described (privacy invasion, potential wrongful disciplinary actions, chilling effects on speech) are realized and significant. Hence, this is an AI Incident as per the framework, involving AI use leading to violations of human and labor rights.
Starbucks and Other Companies Use AI to Monitor Chats, Claiming It Helps Gauge Employee Sentiment; Experts Fear Privacy Violations and Chilling Effects

2024-02-11
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to monitor and analyze employee communications, which directly leads to concerns about privacy invasion and chilling effects in the workplace. The AI system's role in analyzing and flagging employee behavior and emotions constitutes a breach of privacy and labor rights, which are fundamental human rights. The article reports that this monitoring is actively occurring, not just a potential risk, thus the harm is realized. Hence, it meets the criteria for an AI Incident under violations of human rights and labor rights.
These Major Companies Are Using AI To Snoop On Employees' Online Chats: Report

2024-02-26
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved: it scans and analyzes employee communications to detect behaviors such as bullying and harassment and to assess sentiment. The use of this AI system directly raises concerns about violations of labor rights and privacy, which are human rights issues. Monitoring and analyzing employee messages without full transparency or consent can be considered a breach of obligations intended to protect fundamental and labor rights. The article reports actual use of the AI system by major companies, indicating realized or at least ongoing harm to employee privacy and rights. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
These major companies are using AI to snoop on employees' online chats

2024-02-25
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system developed by 'Aware' that scans employee chat messages on platforms like Slack and Microsoft Teams. This AI system is deployed to monitor employees, which directly leads to potential violations of human and labor rights, such as privacy infringement and workplace surveillance without consent. The harm is realized, as the system has already assessed billions of messages and employees express discomfort and distrust, indicating harm to their rights and workplace environment. Hence, this qualifies as an AI Incident under the framework.
These major companies are using AI to snoop through employees' messages, report reveals

2024-02-25
Fox Business
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Aware's software) to monitor employee messages, which is explicitly described. The AI system's use in surveilling employees' private communications can be reasonably linked to violations of labor rights and privacy, which are protected under applicable law. The article indicates that this monitoring is ongoing and has affected millions of employees, thus constituting realized harm. Therefore, this qualifies as an AI Incident due to violations of human rights/labor rights caused by the AI system's use.
These huge companies are using AI to snoop through employees' messages, report says

2024-02-26
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to analyze employee messages and images to detect behaviors such as bullying, harassment, and discrimination. This use of AI directly impacts employees' privacy and could lead to violations of labor and human rights. The article reports that this monitoring is actively occurring, not just a potential risk, and includes concerns about chilling effects on speech, which is a form of harm to individuals and communities. Therefore, the event meets the criteria for an AI Incident due to realized harm linked to AI use in workplace surveillance.
Walmart, Delta, Starbucks among major companies using AI tool to snoop on employees: Report

2024-02-26
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system developed by Aware that monitors employee communications to detect dissatisfaction, harassment, and bullying. This AI system is actively used by major companies, including Walmart, Delta, and Starbucks, to surveil employees. The surveillance raises concerns about privacy breaches and employee rights violations, which are harms to human rights and labor rights as defined. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident rather than a hazard or complementary information.
Big Boss is Watching: How US companies are using AI to snoop on employees

2024-02-26
Firstpost
Why's our monitor labelling this an incident or hazard?
The software from Aware is an AI system that analyzes large volumes of employee communications to flag risks and sentiments. Its use has led to concerns about violations of worker rights and privacy, which are human rights and labor rights issues. The AI system's deployment in monitoring employees' private communications and behavior constitutes a breach of obligations intended to protect fundamental and labor rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harms to employees' privacy and workplace freedom.