AI Employee Monitoring System Sparks Privacy and Labor Rights Controversy in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese tech firm Sangfor's AI-powered 'Behavior Perception System' monitored employees' online activities to predict resignation risk, leading to at least one employee's dismissal after job-seeking was detected. The system's use has raised significant concerns over privacy violations and legality, prompting public backlash and the removal of product information from the company's website.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved, as the '离职倾向分析' (resignation tendency analysis) uses employee internet behavior data to predict their likelihood of leaving. The system's use has directly led to concerns about violations of labor rights and privacy, which are protected under applicable laws. The monitoring and data collection without clear consent or legal basis constitute a breach of fundamental labor rights. Therefore, this qualifies as an AI Incident due to violations of human rights and labor rights caused by the AI system's use.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Human wellbeing, Democracy & human autonomy

Industries
Business processes and support services, Digital security, IT infrastructure and hosting

Affected stakeholders
Workers

Harm types
Human or fundamental rights, Economic/Property, Psychological, Reputational

Severity
AI incident

Business function:
Human resource management, Monitoring and quality control

AI system task:
Forecasting/prediction, Event/anomaly detection

Articles about this incident or hazard

Sangfor's "resignation tendency analysis" patent exposed: it can monitor employees' online behavior through the company network

2022-02-12
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the '离职倾向分析' (resignation tendency analysis) uses employee internet behavior data to predict their likelihood of leaving. The system's use has directly led to concerns about violations of labor rights and privacy, which are protected under applicable laws. The monitoring and data collection without clear consent or legal basis constitute a breach of fundamental labor rights. Therefore, this qualifies as an AI Incident due to violations of human rights and labor rights caused by the AI system's use.

If Sangfor hadn't been exposed, you wouldn't even know your boss had pulled your "pants" down

2022-02-14
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes user behavior logs to infer risks and monitor employees. Its use has directly led to harm by violating employees' rights to privacy and potentially labor rights, as employees are monitored without informed consent, leading to distress and unfair treatment (e.g., being fired on the first day with detailed monitoring evidence). This fits the definition of an AI Incident under violations of human rights or labor rights. The article details realized harm rather than potential harm, so it is not a hazard or complementary information.

Sangfor embroiled in "employee monitoring system" controversy; official website has removed cooperation cases including China Everbright Bank

2022-02-13
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes employee behavior data to generate predictions about employee turnover intentions. Its use has led to public outcry over privacy violations, which constitute a breach of fundamental rights. The involvement of the AI system in monitoring and analyzing personal data without clear consent directly implicates it in violations of human rights and privacy. Therefore, this event qualifies as an AI Incident due to realized harm related to rights violations caused by the AI system's use.

Don't overdo digital management in the workplace

2022-02-15
光明网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-enabled behavior sensing system used for employee monitoring that led to concrete consequences for employees, including dismissal after being identified as job hunting during work hours. This constitutes a violation of employee privacy and labor rights, fulfilling the criteria for an AI Incident under the framework. The harm is realized, not just potential, and the AI system's use is directly linked to the harm. Therefore, the event is classified as an AI Incident.

Uncovering Sangfor's "employee monitoring" business

2022-02-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing employee online behavior to predict turnover risk and monitor work efficiency. The system's use has directly led to violations of employee privacy and labor rights, including unlawful dismissals based on AI-generated data. The article documents realized harm to employees and legal concerns about the system's use, fulfilling the criteria for an AI Incident. The AI system's development and deployment have directly contributed to these harms, making this a clear case of AI Incident rather than a hazard or complementary information.

Sangfor's "resignation tendency analysis" patent exposed: it can monitor employees' online behavior through the company network

2022-02-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The described system uses AI to monitor and analyze employee online behavior to predict resignation tendencies, which directly involves AI system use. The detailed monitoring and data collection intrude on employee privacy and labor rights, which are protected under applicable laws. The event reports the system's existence and use, with public concern about legality, indicating realized or ongoing harm related to rights violations. Therefore, this is an AI Incident due to violations of labor and possibly human rights caused by the AI system's use.

Sangfor's system for detecting employees' job-hopping intentions sparks controversy; lawyer: if employees are unaware, it may be illegal

2022-02-13
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The described system qualifies as an AI system because it analyzes complex behavioral data to infer employee turnover risk, which involves AI techniques such as pattern recognition and predictive analytics. The system's use has directly led to potential violations of personal privacy and labor rights, as it monitors sensitive employee data without clear informed consent, which is a breach of applicable laws like the Personal Information Protection Law. The controversy and legal concerns indicate realized harm to employees' rights, fitting the definition of an AI Incident involving violations of human rights and legal obligations. Therefore, this event is best classified as an AI Incident.

Zhihu says it did not use the behavior perception system to monitor employees: firmly opposes illegal collection of personal information

2022-02-14
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The described behavior perception system qualifies as an AI system because it analyzes employee online behavior to infer intentions, a task involving AI inference from input data. The alleged use of this system to monitor employees' private activities and infer resignation intentions constitutes a violation of labor rights and personal privacy, which falls under harm category (c) - violations of human rights or labor rights. Although Zhihu denies using the system, the report indicates the system's deployment and its monitoring capabilities, implying realized harm or at least a direct link to rights violations. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use in employee surveillance and privacy infringement.

Resignation tendency analysis service exposed: current employees' online résumé submissions recorded, leading to early layoffs

2022-02-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used for employee behavior analysis and prediction of resignation tendencies, which directly impacts employees' rights and privacy. The use of such AI systems has led to adverse employment actions, including dismissals, which constitute violations of labor rights and personal privacy. The unauthorized mass data scraping also suggests potential breaches of legal obligations regarding personal data protection. Therefore, the event meets the criteria for an AI Incident due to violations of human rights and labor rights caused by the AI system's use and data practices.

This monitoring system can detect resignation intentions and slacking off alike; after the controversy, the product page now returns a 404

2022-02-14
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The described system qualifies as an AI system because it performs deep modeling and analysis of user behavior based on large-scale internet usage logs to infer outputs such as resignation risk and productivity metrics. The event involves the use of this AI system in a way that has directly led to concerns about violations of personal privacy rights, a form of human rights violation under the framework. The controversy and public reaction indicate that harm to rights has occurred or is ongoing due to the system's deployment and use. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of an AI system causing or contributing to harm related to privacy and rights violations.

Sangfor's system for detecting employees' job-hopping intentions sparks controversy; lawyer: if employees are unaware, it may be illegal

2022-02-13
东方财富网
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes complex behavioral data to infer employee turnover risk. Its use has directly led to harm in terms of violations of employee privacy rights and potential breaches of applicable laws such as personal information protection laws. The controversy and legal opinions highlight that the system's use without employee knowledge is likely illegal, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing violations of rights and potential legal breaches.

"Employee resignation tendency monitoring system" sparks controversy; A-share company worth 60 billion yuan urgently removes product introduction

2022-02-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The system described is an AI system because it performs behavior analysis and risk prediction based on employee online activities, which goes beyond simple software functions. The use of this system has directly led to harm, including employee dismissals based on AI-generated risk assessments, constituting violations of labor rights and privacy. The article documents realized harm (employees being dismissed after detection by the system) and legal and ethical concerns, fulfilling the criteria for an AI Incident. The controversy and product removal further support the classification as an incident rather than a mere hazard or complementary information.

Sangfor embroiled in "employee monitoring system" controversy; official website has removed cooperation cases including China Everbright Bank

2022-02-13
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system designed to monitor and analyze employee behavior to predict turnover intentions, which constitutes AI system involvement. The use of this system has directly led to harm by infringing on employees' privacy rights and potentially labor rights, as indicated by legal expert commentary and public outcry. The system's monitoring of personal internet activity without clear employee consent constitutes a violation of personal privacy, a recognized harm under the framework. The controversy and removal of cooperation cases suggest misuse or problematic deployment, reinforcing the classification as an AI Incident rather than a mere hazard or complementary information. Hence, the event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Uncovering Sangfor's "employee monitoring" business

2022-02-15
36氪
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as monitoring and analyzing employee online behavior to predict departure risk, which directly leads to violations of employee privacy and labor rights. The system's use without employee consent and its role in influencing employment decisions constitute direct harm under the framework's criteria for AI Incidents, specifically violations of human rights and labor rights. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Submitting résumés, logging into job sites, watching videos, all monitored by your company? This "magic system" sparks heated discussion, and the company behind it responds...

2022-02-13
36氪
Why's our monitor labelling this an incident or hazard?
The behavior awareness system described is an AI system that analyzes large volumes of user behavior data to infer employee intentions and work patterns. Its deployment has directly led to privacy violations and potential breaches of labor rights, as employees are monitored extensively without clear consent. The system's role is pivotal in causing these harms, fulfilling the criteria for an AI Incident under violations of human rights and labor rights. The article documents realized harm rather than potential harm, and the system's use is central to the incident.

Developer of an employee resignation tendency monitoring system, with half-year revenue over 2.5 billion yuan, this company has gone viral

2022-02-15
36氪
Why's our monitor labelling this an incident or hazard?
The system described is an AI system performing deep behavioral analysis and prediction based on large-scale data collection of employee online activities. Its use for monitoring resignation tendencies directly implicates violations of privacy and labor rights, which are recognized harms under the AI Incident definition. The article reports the system's active use and public controversy, indicating realized harm rather than just potential risk. The involvement of the AI system in the development and use stages is clear, and the harm is linked to violations of rights and privacy. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

This monitoring system makes workers shudder: resignation intentions and slacking off can all be detected; after the controversy, the product page now returns a 404

2022-02-14
36氪
Why's our monitor labelling this an incident or hazard?
The described system involves AI-based behavior modeling and analysis to monitor employees' online activities and predict resignation risk. The system's use has directly led to privacy concerns and potential violations of labor rights, as employees' personal and work-related data are monitored without clear consent. This constitutes a breach of obligations intended to protect fundamental and labor rights, fitting the definition of an AI Incident. The controversy and removal of the product page further indicate the realized harm and public recognition of these issues.

Rumored Zhihu layoffs bring an employee monitoring system to light; developer Sangfor's profits plunge

2022-02-15
36氪
Why's our monitor labelling this an incident or hazard?
The behavior perception system described is an AI system that analyzes employee internet behavior to predict resignation tendencies, involving sophisticated data collection and analysis. Its use has directly led to privacy violations and potential breaches of personal information protection laws, which are violations of fundamental rights under applicable law. The article reports actual use and harm (privacy infringement), not just potential risk, thus qualifying as an AI Incident. The controversy and legal concerns further support the classification as an incident rather than a hazard or complementary information.

Quitting a job now feels like a spy thriller

2022-02-14
36氪
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing employee behavior data to predict turnover risk. The system's use leads to direct harm by infringing on employee privacy and labor rights, as employees are monitored extensively and potentially dismissed based on AI predictions. The article details actual use cases and consequences, not just potential risks, fulfilling the criteria for an AI Incident. The harm includes violation of human rights and labor rights through invasive surveillance and possible wrongful termination, which are direct outcomes of the AI system's deployment.

"Monitoring employees' job-hopping intentions" sparks controversy; Sangfor urgently removes the product; lawyer says failing to inform employees may be illegal

2022-02-15
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The system described is an AI system as it performs deep modeling and analysis of user behavior data to generate outputs such as employee risk ratings and predictions of departure tendencies. The use of this AI system has directly led to harm in the form of privacy violations and potential breaches of labor rights, as employees were monitored without proper notification or consent. The controversy and legal opinions confirm that these harms have materialized. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing violations of rights and harm to individuals.

System for detecting employees' job-hopping intentions sparks controversy; lawyer: if employees are unaware, it may be illegal

2022-02-13
驱动之家
Why's our monitor labelling this an incident or hazard?
The system uses AI to analyze employee network behavior and predict turnover risk, which constitutes an AI system's use. The lack of employee knowledge and consent in processing personal data likely violates legal protections, constituting a breach of labor and personal rights. This is a direct violation of rights caused by the AI system's use, qualifying as an AI Incident under the framework.

If Sangfor hadn't been exposed, you wouldn't even know your boss had pulled your "pants" down

2022-02-14
驱动之家
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes employee online behavior logs to infer risks such as employee productivity, potential resignation, and other personal activities. The use of this system has directly led to harm in the form of privacy violations and labor rights infringements, as employees are monitored without informed consent and subjected to surveillance that affects their work conditions and personal dignity. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

Resignation tendency analysis service exposed: current employees' online résumé submissions recorded, leading to early layoffs

2022-02-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used for monitoring and analyzing employee behavior to predict resignation tendencies. The use of this AI system has directly led to harm, including employees being dismissed after their job-seeking activities were detected, which constitutes a violation of labor rights and privacy. The large-scale unauthorized data scraping also indicates breaches of intellectual property and privacy rights. These harms fall under violations of human and labor rights as defined in the framework, qualifying this as an AI Incident.

Resignation tendency analysis system stirs controversy! Sangfor's official website has removed the product introduction page

2022-02-14
驱动之家
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes employee online behavior to predict resignation intentions, involving sophisticated data processing and inference. Its use has directly led to harm by violating employee privacy and labor rights, as indicated by public controversy and a reported case of suspected monitoring leading to dismissal. The system's role is pivotal in causing these harms. The removal of the product page suggests acknowledgment of the controversy but does not negate the realized harm. Hence, this event meets the criteria for an AI Incident due to violations of labor rights and privacy caused by the AI system's use.

A company builds an employee behavior monitoring system to learn of employees' job-hopping intentions in advance

2022-02-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The system described involves AI capabilities for behavior monitoring and prediction based on multiple data inputs. The use of such a system to monitor employees without their consent likely constitutes a violation of labor rights and privacy, which falls under violations of human rights or labor rights as defined. Therefore, this event qualifies as an AI Incident due to the direct use of AI leading to potential or actual rights violations.

Sangfor salesperson responds on monitoring via the resignation tendency analysis service: many big companies use it, and it is perfectly legal

2022-02-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the service analyzes employee behavior to predict attrition, which is a typical AI application involving data analysis and prediction. The use of this system has directly led to harm, as an employee suspects their information was used leading to dismissal. This constitutes a violation of labor rights and privacy, fitting the definition of an AI Incident due to harm to individuals and potential rights violations caused by the AI system's use.

Sangfor revealed to offer a resignation tendency analysis service, with a patent applied for long ago

2022-02-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The described system qualifies as an AI system because it processes behavioral data to infer predictions about employee intentions, which is a typical AI application. The use of such a system for monitoring employees' online behavior to predict resignation could lead to violations of labor rights or privacy if deployed without proper safeguards. However, the article only reports the patent application and the existence of the system, with no indication that it has been deployed or caused harm yet. Therefore, this event represents a plausible risk of harm (e.g., privacy violations, labor rights infringements) but no realized harm is reported.

Media on the "employee resignation tendency monitoring system": don't overdo digital management in the workplace

2022-02-14
The Paper
Why's our monitor labelling this an incident or hazard?
The described behavior sensing system qualifies as an AI system because it infers from employee input data (e.g., website visits, chat keywords) to generate outputs about employee behavior and risk of departure. The system's use has directly led to harm, including violation of employee privacy and labor rights, as evidenced by the reported dismissal following monitoring. This constitutes a violation of human rights and labor rights under applicable law, fulfilling the criteria for an AI Incident. The article also discusses broader societal and legal implications, but the primary focus is on realized harm caused by the AI system's use in employee monitoring and management.

Zhihu statement: did not use the "behavior perception system" to monitor employees, and firmly opposes illegal collection of personal information

2022-02-14
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article involves an AI-related system (behavior perception system) that could analyze employee behavior, implying AI system involvement. However, the company denies using it, and no actual harm or violation has been confirmed or occurred. The event centers on the controversy, denial, and potential privacy concerns but does not document realized harm or direct AI system use causing harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about AI monitoring systems and corporate responses, fitting the definition of Complementary Information.

Does such "technological innovation" have any bottom line?

2022-02-15
看中国
Why's our monitor labelling this an incident or hazard?
The described monitoring system qualifies as an AI system because it performs behavior analysis and prediction tasks based on employee data. Its use has directly led to harm by infringing on employees' privacy and violating legal protections, as it collects and processes personal data without proper notification or consent. The article details realized harm rather than potential harm, and the system's deployment over several years with widespread use further confirms the incident status. The involvement of the AI system in causing rights violations and privacy harm meets the criteria for an AI Incident under the OECD framework.

Company exposed online for using a "behavior perception system": monitoring chats and searches to keep employees from job-hopping

2022-02-11
和讯网
Why's our monitor labelling this an incident or hazard?
The 'behavior perception system' qualifies as an AI system because it analyzes complex behavioral data such as chat content and search keywords to infer employee intentions, which goes beyond simple software monitoring. The use of this system directly leads to a violation of labor rights and potentially privacy rights, as it surveils employees without their consent to prevent job changes. This constitutes a breach of obligations under applicable law protecting labor rights, thus meeting the criteria for an AI Incident.

Job hunting and video watching online monitored by your employer? "Magic system" sparks heated discussion, and the company behind it responds

2022-02-13
和讯网
Why's our monitor labelling this an incident or hazard?
The behavior awareness system is an AI system that analyzes employee online behavior to infer turnover risk and monitor work activities. Its use has directly led to privacy violations and potential breaches of labor rights, as employees are monitored extensively without clear consent, which is a violation of fundamental rights. The AI system's role is pivotal in enabling this intrusive surveillance. Hence, the event meets the criteria for an AI Incident involving violations of human rights and labor rights.

Sangfor removes the resignation tendency analysis system introduction page

2022-02-14
金融界网
Why's our monitor labelling this an incident or hazard?
The described 'behavior perception system' implies AI involvement due to its monitoring and predictive functions related to employee behavior. However, there is no mention of any harm, violation of rights, or malfunction caused by the system. The removal of the system's introduction page suggests a change in availability or company policy but does not itself constitute an incident or hazard. The article mainly provides background and company information, which fits the definition of Complementary Information rather than an Incident or Hazard.

Resignation tendency analysis service exposed: current employees' online résumé submissions recorded, leading to early layoffs

2022-02-12
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used for 'resignation tendency analysis' that monitors employees' online behavior and predicts their likelihood to leave, which is then used by employers to take adverse actions such as early dismissal. This use of AI directly leads to violations of labor rights and privacy, fulfilling the criteria for an AI Incident under the framework. Additionally, the alleged unauthorized scraping of personal data further supports the violation of rights. Hence, this is classified as an AI Incident due to realized harm involving human rights and labor rights violations caused by the AI system's use.

Sangfor's "resignation tendency analysis" patent exposed: it can monitor employees' online behavior through the company network

2022-02-12
和讯网
Why's our monitor labelling this an incident or hazard?
The patented system explicitly uses AI to analyze employee internet behavior to predict turnover, which is an AI system by definition. The event reports actual use of this system leading to employee monitoring and a case of a person suspecting their dismissal was linked to this monitoring, indicating realized harm. The harm involves violation of labor rights and privacy, which are protected human rights. The system's role is pivotal in causing this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

How does the "head teacher's" behavior perception system do its monitoring?

2022-02-15
ChinaByte比特网
Why's our monitor labelling this an incident or hazard?
The described 'Behavior Awareness System' is an AI system that analyzes large volumes of user behavior data to infer employee intentions and monitor activities. Its deployment and use have directly led to violations of employee privacy and potentially labor rights, constituting harm under the framework. The article details actual monitoring practices and their implications, not just potential risks, so this qualifies as an AI Incident. The harms include violations of human rights and privacy, which are explicitly recognized in the definitions. Therefore, the event is best classified as an AI Incident.

Submitting résumés, logging into job sites, watching videos, all monitored by your company? This "magic system" sparks heated discussion, and the company behind it responds

2022-02-13
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The behavior awareness system uses AI techniques to analyze large volumes of user behavior data to infer sensitive information such as resignation intentions and work efficiency. This constitutes an AI system as it performs deep modeling and analysis of user behavior. The system's use by companies to monitor employees' private activities without clear consent leads to violations of privacy and labor rights, which are harms under the framework. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in employee surveillance and privacy infringement.

Shocking: whether you want to quit or have been sending out résumés everywhere, your bosses know it all?

2022-02-12
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing large-scale employee online behavior data to predict departure intentions, which is a clear AI system use case. The system's use has directly led to harm in terms of privacy violations and potential labor rights infringements, as employees are monitored extensively and possibly penalized based on AI-generated risk assessments. The article also references legal and ethical concerns, public backlash, and a related court case, reinforcing the realized harm. Hence, this qualifies as an AI Incident due to direct harm to individuals' rights and privacy caused by the AI system's deployment and use.

"Monitoring employees' resignation intentions" sparks controversy; media: don't treat workers like robots

2022-02-14
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes employee behavior data to predict their likelihood of leaving, which fits the definition of an AI system influencing decisions and monitoring individuals. Its use has directly led to harm in terms of violating employees' rights to privacy and creating a hostile work environment, which falls under violations of human rights and labor rights. The article highlights the controversy and ethical issues arising from this monitoring, confirming that harm has occurred. Therefore, this event qualifies as an AI Incident.

Submitting a résumé at work, laid off after a talk: where are the boundaries of workplace monitoring?

2022-02-14
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (behavior sensing system) used to monitor employees' online behavior and predict their resignation tendencies. The system's use has directly led to harm in the form of privacy violations and labor rights infringements, as evidenced by public controversy and a legal case where an employee was unlawfully terminated related to surveillance practices. The AI system's role is pivotal in enabling invasive monitoring and data analysis that infringes on employees' rights. Hence, this is an AI Incident rather than a hazard or complementary information.

Submitting résumés, logging into job sites, watching videos, all monitored by your company? This "magic system" sparks heated discussion, and the company behind it responds...

2022-02-13
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The system described is an AI system because it performs deep modeling and analysis of user behavior data to generate outputs such as predictions of employee turnover risk and detection of potential data leaks. The use of this AI system for employee monitoring has directly led to concerns about violations of employee privacy rights, which are fundamental rights protected by law. The article reports that the system is actively used by companies to monitor employees, indicating realized harm in terms of privacy infringement. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing harm related to human rights violations (privacy).

Submitting résumés and logging into job sites monitored by your company? Sangfor's official website no longer lists the "Behavior Awareness System BA" product

2022-02-13
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The behavior awareness system is an AI system that analyzes employee behavior data to generate outputs such as turnover risk predictions and work efficiency assessments. Its use by employers to monitor employees' private activities, including job hunting behavior, has directly led to privacy concerns and potential violations of labor rights. The article reports that this monitoring is happening and has caused public controversy, indicating realized harm. The AI system's role is pivotal as it enables detailed surveillance and analysis that would not be feasible otherwise. Therefore, this event qualifies as an AI Incident due to violations of human rights and labor rights stemming from the AI system's use.

Is monitoring employees' resignation tendencies "technology doing evil"?

2022-02-15
广西新闻网
Why's our monitor labelling this an incident or hazard?
The article centers on the potential and controversial use of AI-powered behavior monitoring systems for employee surveillance, raising privacy and ethical concerns. However, it does not describe a realized harm or incident caused by the AI system, nor does it report a near-miss or credible risk event. It mainly provides contextual discussion and societal reaction to the technology's use and its implications, without detailing a specific AI Incident or Hazard. Therefore, it fits best as Complementary Information, providing background and societal context rather than reporting a new AI Incident or Hazard.

Fisheye Observation | Is there no bottom line to this kind of "technological innovation"? - China Digital Times

2022-02-14
China Digital Times
Why's our monitor labelling this an incident or hazard?
The described monitoring system qualifies as an AI system because it performs behavior analysis and prediction based on employee data, which involves AI techniques. The system's use has directly led to harm in the form of privacy violations and breaches of personal information protection laws, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the system is actively used to monitor employees without proper consent, causing violations of rights and legal obligations. Therefore, this event is best classified as an AI Incident.

Netizens expose "Employee Behavior Perception System": it can detect employees' job-hopping intentions in advance

2022-02-11
163.com
Why's our monitor labelling this an incident or hazard?
The system described involves AI capabilities such as monitoring and analyzing employee behavior data (e.g., website visits, keyword detection in chats) to infer intentions, which qualifies as an AI system. The use of this system to monitor employees without their consent likely constitutes a violation of labor rights and privacy, thus causing harm to individuals' rights. Therefore, this event qualifies as an AI Incident due to the violation of human rights through the use of AI for intrusive employee surveillance.

Sangfor revealed to offer a resignation-tendency analysis service; the company already filed a patent for it in 2018

2022-02-12
163.com
Why's our monitor labelling this an incident or hazard?
The described system uses AI to analyze employee internet behavior data to infer resignation tendencies, which implicates privacy and labor rights concerns. However, the article does not report any realized harm or incidents resulting from this system's use, only the existence and patenting of the technology. Therefore, it represents a potential risk or hazard related to employee privacy and labor rights but no direct or indirect harm has been reported yet.

This surveillance system makes workers shudder: resignation tendencies and slacking off can all be monitored

2022-02-14
163.com
Why's our monitor labelling this an incident or hazard?
The system described is an AI system because it uses behavior modeling and data mining to analyze employee online activities and predict turnover risk. Its use has caused realized harm by infringing on employee privacy and potentially leading to unjust employment actions, which constitute violations of human rights and labor rights. The article reports actual use and consequences, not just potential risks, so this qualifies as an AI Incident rather than a hazard or complementary information.

Sangfor's system for monitoring employees' job-hopping tendencies sparks controversy; lawyer: if employees are unaware, it may be illegal

2022-02-13
163.com
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes employee behavior data to predict turnover risk. Its use has directly led to harm in the form of privacy violations and potential unlawful dismissal, which are breaches of fundamental rights and labor protections. The monitoring and analysis of personal online behavior without clear informed consent or legal basis constitutes a violation of human rights and applicable law. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use in employee monitoring and decision-making.

Internet companies' monitoring of employees at work sparks heated debate. Is it time to pay attention to the employee experience?!

2022-02-13
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a behavior monitoring and analysis tool that uses data to predict employee turnover risk. The system's use has directly led to harm, including violations of employee privacy and labor rights, as well as adverse effects on employee morale and job security. These harms fall under the definition of an AI Incident, as the AI system's use has directly led to violations of rights and harm to employees. The article also discusses broader societal reactions and governance issues, but the primary focus is on the realized harms caused by the AI system's deployment.

Sangfor removes the product page for "monitoring employees' resignation tendencies" from its official website

2022-02-14
163.com
Why's our monitor labelling this an incident or hazard?
The described system qualifies as an AI system because it uses behavior analysis to predict employee resignation risk and work efficiency, which involves AI inference from input data. The event concerns the use and development of this AI system. However, there is no indication that the system has directly or indirectly caused harm such as violations of labor rights or privacy breaches. The removal of the product page and company statements suggest concerns or controversies but do not document an actual incident of harm. Therefore, this event is best classified as Complementary Information, providing context and updates about an AI system with potential implications but no confirmed incident or hazard of harm at this time.

Laid off after being monitored for job hunting, with résumé submission counts plain to see! Lawyer: this infringes personal privacy rights

2022-02-14
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used for monitoring and analyzing employee behavior to predict turnover risk. The system's deployment has directly led to harm in the form of privacy violations and potential wrongful dismissal, which constitute violations of personal rights under applicable law. The lack of employee consent and transparency further supports the classification as an AI Incident involving violations of human rights and privacy. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Sangfor: we did not use web-crawler technology to steal 210 million résumé records

2022-02-13
163.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of employee behavior monitoring software that predicts turnover intentions, which implies AI-based data analysis. However, the main legal incident concerns another company accused of illegal data scraping, not Sangfor, and Sangfor denies involvement in the scraping. No direct or indirect harm from Sangfor's AI system is reported in this article; the discussion of the software's capabilities and the legal case provides background rather than reporting a new AI Incident or Hazard. The classification as Complementary Information is therefore appropriate.

Monitored by your company for submitting résumés and logging into job sites? Sangfor's "Behavior Perception System BA" product can no longer be found on its official website

2022-02-13
163.com
Why's our monitor labelling this an incident or hazard?
The behavior awareness system is an AI system that analyzes employee behavior data to infer turnover risk and monitor work activities. Its use by companies to surveil employees' job-seeking and communication activities without clear consent constitutes a violation of privacy and labor rights, which are protected under applicable laws. The AI system's deployment has directly led to these harms, fulfilling the criteria for an AI Incident. Although the system developer claims neutrality, the actual use by employers causes the harm. Hence, this event is classified as an AI Incident due to realized violations of rights stemming from AI-enabled employee monitoring.

Corporate management should not be heartless

2022-02-15
中国经济网
Why's our monitor labelling this an incident or hazard?
The described behavior monitoring system qualifies as an AI system because it analyzes employee behavior data to infer resignation tendencies and work status, which involves sophisticated data processing and prediction. The system's use has directly led to concerns and potential violations of employees' privacy rights, a form of human rights violation. The article details actual use and impact, not just potential risks, thus constituting an AI Incident. The discussion of legal cases and privacy laws further supports that harm related to rights violations is occurring or has occurred due to the AI system's deployment.

Monitoring employees' job-hopping moves may infringe rights, but the monitoring system itself is not inherently at fault

2022-02-14
红网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (behavior perception system) used to monitor employees' job-seeking activities and communications, which is a clear AI system involvement. The use of this system by the company without employee consent has directly led to violations of privacy rights and personal information protection laws, which are breaches of fundamental rights. This constitutes harm under the AI Incident definition (c). Although the developer is not responsible for the misuse, the company's use of the AI system caused the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Star Market Daily (Shanghai; reporters Huang Xinyi and Zhang Yangyang): A netizen claimed on social media that after submitting a résumé to a recruitment website during work hours, they were summoned by their manager and laid off; the company had deployed a Sangfor system to analyze employees' resignation tendencies.

2022-02-13
证券之星
Why's our monitor labelling this an incident or hazard?
The system described is an AI system that analyzes employee online behavior to predict turnover risk. Its use has directly led to an employee being laid off after monitoring their job-seeking activity, constituting harm to labor rights and privacy (a violation of human rights and labor rights). The system's operation involves processing personal data without clear employee consent, which may be illegal under applicable laws. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm involving rights violations and potential unlawful practices.

Recently, a netizen claimed on social media that they had experienced a similar situation: after submitting a résumé through a recruitment app during work hours, they were summoned by their manager and dismissed.

2022-02-15
证券之星
Why's our monitor labelling this an incident or hazard?
The described employee behavior monitoring system qualifies as an AI system because it analyzes complex employee online behavior data to infer intentions such as job-seeking and work efficiency. The system's use has directly led to harm: an employee was dismissed after the system detected job application activity, which constitutes a violation of privacy and potentially labor rights. The event thus meets the criteria for an AI Incident due to realized harm (privacy infringement and employment impact) caused by the AI system's use. The article also discusses legal and societal responses, but the primary focus is on the incident itself and its consequences.

Monitored by your company for submitting résumés and logging into job sites? Sangfor's "Behavior Perception System BA" product can no longer be found on its official website

2022-02-13
新浪财经
Why's our monitor labelling this an incident or hazard?
The behavior awareness system described is an AI system that analyzes user behavior data to generate outputs such as predictions of employee turnover and work efficiency. Its use by companies to monitor employees' activities, including job applications and chat content, directly impacts employees' privacy and labor rights. The article highlights concerns and legal opinions about potential violations of personal information and privacy rights, indicating realized harm or at least ongoing harm. The AI system's role is pivotal in enabling this intrusive monitoring. Hence, this event meets the criteria for an AI Incident due to violations of human rights and labor rights caused by the AI system's use.

This surveillance system makes workers shudder: resignation tendencies and slacking off can all be monitored; after the controversy broke, the product page now returns a 404

2022-02-14
新浪财经
Why's our monitor labelling this an incident or hazard?
The described system involves AI through behavior modeling and risk analysis based on extensive data mining of employee online activities. The system's deployment has directly led to privacy concerns and potential violations of employee rights, which are harms under the AI Incident definition (violations of human rights and labor rights). The controversy and public backlash, along with the product page removal, indicate that harm has materialized or is ongoing. Hence, this is not merely a potential risk or complementary information but an AI Incident.

Zhihu denies ever installing or using the "Behavior Perception System"

2022-02-14
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that analyzes employee behavior to predict departure risk, which is a direct use of AI for surveillance and behavioral inference. The system's deployment in companies constitutes a violation or potential violation of employee privacy rights, a breach of fundamental rights protected by law. The article reports actual use of this system by multiple companies, causing realized harm to employees' privacy and personal information rights. Although Zhihu denies using the system, the system's existence and use elsewhere is confirmed, and the harm is occurring. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights (privacy and personal data protection). The article also includes legal expert opinions confirming the potential for rights violations. Hence, the event is not merely a hazard or complementary information but an AI Incident.

Uncovering Sangfor's "employee surveillance" business

2022-02-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (employee behavior monitoring and analysis system) developed and used by 深信服. The system analyzes employee internet behavior and job search activities to predict turnover risk, which is a clear AI application involving data inference and prediction. The use of this system has directly led to privacy violations and labor rights infringements, as employees are monitored without consent and sometimes dismissed based on the system's outputs. This constitutes a violation of human rights and labor rights under the framework. Hence, this is an AI Incident due to realized harm caused by the AI system's use.

Sangfor's system for monitoring employees' job-hopping tendencies sparks controversy; lawyer: if...

2022-02-13
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The system described is an AI system as it analyzes complex behavioral data to infer employee turnover risk. The use of this system has directly led to harm, including employee dismissal after monitoring their job-seeking behavior, which constitutes harm to individuals' rights and privacy. The event also involves potential violations of personal information protection laws, which are legal obligations protecting fundamental rights. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use and the legal rights violations involved.

Resignation-tendency analysis and monitoring spark controversy: employers can learn of employees' job-...

2022-02-12
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The described system qualifies as an AI system because it analyzes behavioral data to infer resignation intentions, a predictive task typical of AI. The event involves the use of this AI system in a way that has led to alleged violations of personal information rights and privacy, which are human rights concerns. The legal case and penalties for data scraping further confirm harm has occurred. Therefore, this event meets the criteria for an AI Incident due to violations of rights and harm to individuals caused directly or indirectly by the AI system's use and associated data practices.

A "magic system" can monitor employees' resignation tendencies; this listed company's prod...

2022-02-14
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The system described involves AI-like capabilities in analyzing large volumes of employee online behavior data to infer intentions and risks, which fits the definition of an AI system. The use of this system to monitor employees' private online activities and infer their resignation intentions raises concerns about violations of labor rights and privacy, which are human rights issues. Since the system is actively used to monitor employees and potentially impacts their rights, this constitutes an AI Incident due to violation of human rights and labor rights through the use of AI for intrusive employee surveillance and risk assessment.

"System for monitoring employees' resignation tendencies" sparks controversy

2022-02-15
华商网
Why's our monitor labelling this an incident or hazard?
The system described is an AI system because it performs behavior analysis and prediction based on collected data to infer employees' departure intentions. Its use has directly led to harm: employees have been dismissed based on the system's outputs, and their privacy has been violated without consent, constituting a breach of labor and personal rights. The controversy and legal concerns further support the classification as an AI Incident. The event is not merely a potential risk (hazard) or complementary information, as actual harm has occurred and is documented. Therefore, the classification is AI Incident.

A controversial Chinese tech system can predict whether employees are about to resign -- by spying on their online activities

2022-02-15
Insider
Why's our monitor labelling this an incident or hazard?
The system qualifies as an AI system because it analyzes online behavior data to infer resignation risk, which involves sophisticated data processing and prediction. The use of this AI system has directly led to harm in the form of employee dismissal and widespread privacy violations, constituting a breach of labor rights and personal privacy. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm to individuals' rights and well-being.

China tech firm develops controversial resignation forecasting system

2022-02-15
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The system qualifies as an AI system because it uses data analytics and behavioral prediction to infer resignation intentions from online activities, which involves sophisticated data processing and prediction beyond standard software. The use of this AI system has directly led to harm, including violations of personal privacy and labor rights, as evidenced by the firing of an employee based on the system's outputs. This constitutes a violation of human rights and labor rights under applicable law, fulfilling the criteria for an AI Incident. The controversy and legal context further support the classification as an incident rather than a hazard or complementary information.

China tech firm develops controversial IT system that can predict whether an employee is about to resign

2022-02-15
The Star
Why's our monitor labelling this an incident or hazard?
The system qualifies as an AI system because it performs data analytics and prediction of employee behavior based on complex input data (online activities). The event involves the use of this AI system by companies to monitor employees and make employment decisions, which has directly led to harm (firing an employee) and raises concerns about privacy violations and labor rights breaches. The involvement of the AI system in causing these harms is direct and material. Hence, this event meets the criteria for an AI Incident due to violations of human rights and labor rights caused by the AI system's use.

AsiaOne

2022-02-15
AsiaOne
Why's our monitor labelling this an incident or hazard?
The system qualifies as an AI system because it uses data analytics and predictive modeling to infer employees' resignation intentions from their online behavior. The use of this system has directly led to harm, as evidenced by the firing of an employee based on the AI's output, constituting a violation of labor rights and personal privacy. The controversy and public debate further highlight the societal impact and potential legal breaches under China's Personal Information Protection Law. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

A controversial Chinese tech system can predict whether employees are about to resign -- by spying on their online activities

2022-02-15
Business Insider
Why's our monitor labelling this an incident or hazard?
The system qualifies as an AI system because it uses data analysis and predictive algorithms to infer employees' resignation risk from their online behavior. The use of this AI system has directly led to harm in the form of violations of labor rights and privacy, as employees are monitored without consent and face adverse employment actions based on AI-generated predictions. This constitutes a breach of fundamental labor rights and privacy protections, fitting the definition of an AI Incident under violations of human rights or labor rights. The article reports realized harm, not just potential risk, making this an AI Incident rather than a hazard or complementary information.

China tech firm develops controversial IT system that can predict whether an employee is about to resign

2022-02-16
IntellAsia
Why's our monitor labelling this an incident or hazard?
The system uses AI to analyze employee behavior and predict resignation intentions, which is a clear AI system involvement. The use of this system has directly led to harm, as evidenced by an employee being fired based on the system's output, indicating a violation of labor rights and personal privacy. The controversy and public debate further highlight the human rights concerns. Although the extent of deployment is unclear, the realized harm to at least one individual and the system's role in employment decisions meet the criteria for an AI Incident under violations of human rights and labor rights.

Zhihu reportedly laying off staff; resignation-tendency monitoring system exposed - 20220213 - China

2022-02-12
明報新聞網
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the behavior monitoring system uses AI to analyze employees' online activities to predict resignation tendencies. Its use directly affects employee privacy and labor rights: intrusive surveillance without clear consent breaches obligations under applicable law protecting those rights, meeting the criteria for an AI Incident. The harm is realized, as employees are monitored in a way that infringes on their rights and potentially affects their employment conditions.

Is there no bottom line to this kind of "technological innovation"? (Photos) - Commentary - Fisheye Observation

2022-02-15
看中国
Why's our monitor labelling this an incident or hazard?
The described system is an AI system performing behavioral analysis and prediction to infer employee turnover intentions. Its use involves processing sensitive personal data without informed consent, which violates legal protections for personal information and labor rights. The article indicates that this practice is widespread and ongoing, causing direct harm to employees' privacy and rights. Therefore, this event qualifies as an AI Incident due to the direct violation of human rights and labor rights caused by the AI system's use.