Company Uses AI to Clone Departed Employees, Raising Legal and Ethical Concerns in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Several Chinese companies have used AI to create digital clones of former employees by training models on their work documents and communications. These AI avatars continue performing tasks after the employees leave, sometimes without explicit consent, sparking public outcry and legal warnings over privacy, intellectual property, and labor rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is clearly involved as the company uses AI to create a digital avatar that performs tasks based on training data from the employee. The event involves the use of AI (use phase) with the employee's consent, so no direct harm has occurred. However, legal commentary highlights potential privacy and personal information rights violations if consent is not obtained, indicating plausible future harm. Therefore, this event qualifies as an AI Hazard because it could plausibly lead to an AI Incident if consent is not properly managed or if the system is misused. It is not an AI Incident because no harm has yet materialized, nor is it Complementary Information or Unrelated as the main focus is on the AI system's use and potential legal risks.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Business processes and support services

Affected stakeholders
Workers; Business

Harm types
Human or fundamental rights; Reputational

Severity
AI hazard

Business function:
Human resource management

AI system task:
Content generation


Articles about this incident or hazard

After Quitting, Employee Is "Refined" into an AI Digital Human That Keeps Working; Public Opinion Erupts: Is It Legal?

2026-04-03
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the company uses AI to generate digital clones of employees based on their data. The use of this AI system without the employees' explicit consent constitutes a violation of rights, specifically privacy and intellectual property rights, which are protected under applicable law. The article indicates that this practice is likely illegal and harmful to the employees' rights, thus constituting an AI Incident due to violations of human rights and legal obligations. The harm is realized as the AI system is actively used to replace or mimic the departed employees' work without authorization.

Departed Employee Made into a Digital Twin That Keeps Working; Consent Obtained, System in Internal Testing

2026-04-06
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the company uses AI to create a digital avatar that performs tasks based on training data from the employee. The event involves the use of AI (use phase) with the employee's consent, so no direct harm has occurred. However, legal commentary highlights potential privacy and personal information rights violations if consent is not obtained, indicating plausible future harm. Therefore, this event qualifies as an AI Hazard because it could plausibly lead to an AI Incident if consent is not properly managed or if the system is misused. It is not an AI Incident because no harm has yet materialized, nor is it Complementary Information or Unrelated as the main focus is on the AI system's use and potential legal risks.

A "Former Colleague" Distilled into Tokens: Can AI "Steal" Workplace Experience?

2026-04-06
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that process and generate workplace skills from employee data, which is clearly AI-related. However, the article primarily addresses concerns about privacy, intellectual property, and potential future impacts rather than describing a concrete harm or violation that has already occurred. There is no direct or indirect evidence of realized harm such as privacy breaches, legal violations, or health impacts. Instead, it discusses plausible future risks and ethical debates, making it an AI Hazard scenario. Additionally, the article includes expert opinions and regulatory drafts addressing these concerns, which further supports the classification as a hazard rather than an incident or complementary information focused on responses to a past event.

Why Does It Feel Chilling When a Company Uses AI to Replicate a Departed Employee to Keep Working?

2026-04-07
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to replicate a former employee's work persona and tasks, which fits the definition of an AI system. The use of this AI system is ongoing and has been consented to by the employee, but the article emphasizes the potential for serious harm to personal rights, dignity, and labor protections if such practices become widespread or are used without consent. No actual legal violations or direct harms have been reported yet, but the plausible future harms are significant and credible, including violations of personality rights and labor rights. Thus, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI and its societal implications.

Absurd! After Quitting, Employee Is "Refined" into an AI Digital Human to Keep "Working"

2026-04-03
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the project 'colleague-skill' uses AI models trained on personal and work data to generate a digital avatar of the employee. The use of this AI system after the employee's departure to continue work tasks directly implicates the AI system's use. The event raises concerns about violations of labor rights and possibly privacy rights, as the employee's data is used to continue work without their active consent or presence, which can be considered a violation of human rights or labor rights under applicable law. Therefore, this constitutes an AI Incident due to the violation of rights caused by the AI system's use.

Departed Employee Made into a Digital Twin That Keeps Working; AI Experiment Draws Attention

2026-04-06
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to create digital avatars of former employees to perform work tasks. The use is experimental and internal, with consent from the involved employee, so no direct harm has occurred. However, the lawyer's warning about potential privacy violations if data is used without consent indicates a plausible risk of legal and personal rights harm in similar cases. Since no actual harm or incident is reported, but there is a credible potential for harm related to privacy and personal data rights, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Replicating Employees with AI Without Consent Can Carry Up to Seven Years in Prison; A New Challenge for Workplace Ethics

2026-04-07
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to replicate employees digitally, which directly implicates personal information protection laws and labor rights. The AI system's use in processing and continuing work based on personal data without clear, ongoing consent poses a direct risk of harm to individual rights and privacy. The article mentions legal consequences (up to 7 years imprisonment) for unauthorized use, indicating that such use can lead to violations of applicable laws protecting fundamental and labor rights. Although the AI replicas are not yet widely deployed externally, the current use and associated risks meet the criteria for an AI Incident due to realized or ongoing harm related to rights violations and legal breaches.

Departed Employee Made into a Digital Twin That Keeps Working? Company Staff Respond: The Former Colleague Consented

2026-04-06
金羊网
Why's our monitor labelling this an incident or hazard?
An AI system (digital employee avatars) is explicitly involved, trained on personal data of employees. The use is with consent, so no direct or indirect harm has occurred yet. The legal warning highlights the potential for serious privacy violations and legal harm if such consent is absent or data is misused. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident (privacy violations, legal breaches) if mismanaged. It is not an AI Incident because no harm has materialized, nor is it merely Complementary Information or Unrelated, as the AI system's use and potential risks are central to the event.

A "Former Colleague" Distilled into Tokens: Can AI "Steal" Workplace Experience?

2026-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article involves AI systems that process employee data to generate AI skills, which could plausibly lead to harms such as privacy violations, intellectual property infringements, and labor market disruptions. However, no specific realized harm or incident is described; the concerns are prospective and regulatory in nature. Therefore, the event fits the definition of an AI Hazard, as it outlines credible potential harms from AI use in workplace skill distillation and AI-driven job displacement, but does not document an actual AI Incident or harm that has already occurred.

A "Former Colleague" Distilled into Tokens: Can AI "Steal" Workplace Experience?

2026-04-05
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The article primarily addresses potential and ongoing concerns about AI's use in distilling workplace skills and its impact on employment, privacy, and intellectual property. It references regulatory drafts and expert opinions, indicating societal and governance responses to these issues. There is no description of a specific AI Incident causing realized harm, nor a narrowly defined AI Hazard event with imminent risk. Therefore, the article fits best as Complementary Information, providing context, analysis, and updates on AI-related developments and their implications rather than reporting a concrete AI Incident or Hazard.

"A Colleague Was Refined" Tops the Trending Searches! Netizens: It's Creepy!

2026-04-07
m.163.com
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the company uses AI models trained on personal work data to simulate a former employee's work style. The use of AI here is in the development and deployment of a digital 'colleague' that can perform simple tasks. While the employee consented in this case, the practice raises significant concerns about privacy and personal data rights, which are fundamental rights. Since no actual harm or violation has been reported, but the potential for harm (privacy violations, labor rights issues, and ethical concerns) is credible and plausible, this event fits the definition of an AI Hazard rather than an AI Incident. The article also discusses societal reactions and legal interpretations, but these are contextual and do not constitute complementary information as the main focus is on the AI use and its implications.

Shandong Company Makes Departed Employee into a Digital Twin That Keeps Working; Current Staff Respond: It's a Bit Clumsy, Not Yet Used Externally, and the Employee Consented; Lawyer Warns: Without Consent, Up to Seven Years in Prison

2026-04-07
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (digital replicas of employees) trained on personal data to perform work tasks. The use of personal information without proper consent constitutes a violation of personal information protection laws and potentially infringes on employee rights, which is a breach of applicable law protecting fundamental rights. Although the company claims consent was obtained and the system is in internal testing, the legal warning and public concern indicate that the AI system's use has already led to or risks leading to harm in terms of privacy violations and unauthorized data use. Therefore, this is classified as an AI Incident because the AI system's use has directly or indirectly led to violations of human rights and legal obligations.

LOGO Institute | A Colleague "Refined" into an Immortal Worker

2026-04-08
China Digital Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system that has been developed and used to create digital replicas of employees to perform work tasks, which directly affects personal privacy and personality rights. The AI system's use has led to a breach of obligations intended to protect fundamental and labor rights, as the digital replication of employees' work style and communication without proper consent constitutes a violation. The harm is realized and ongoing, not merely potential, making this an AI Incident rather than a hazard or complementary information.

The "refining colleagues" phenomenon reflects the intertwining of new and old problems between AI development and labor-law relations.

2026-04-08
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to replicate a former employee's work capabilities and personality, which directly relates to the use of AI. The article highlights that the company uses personal data and likeness of the employee, raising issues of consent, legality, and personality rights violations. These constitute violations of human rights and labor rights under the framework, fulfilling the criteria for an AI Incident. The harm is realized as the AI system's use infringes on personal and intellectual property rights and raises ethical concerns. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Straight Talk | Would You Really Be Willing to Be "Refined" into a Digital Human After Quitting?

2026-04-08
南方网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems trained on personal employee data to replace human workers, which directly relates to AI system use. Although the article does not report a concrete incident of harm, it clearly outlines the potential for violations of labor rights, privacy, and ethical breaches, which are harms under the AI Incident definition. Since the harm is not yet realized but plausibly could occur, this situation fits the definition of an AI Hazard rather than an AI Incident. The article primarily focuses on the potential risks and ethical implications rather than reporting an actual harm event.

"Refined Colleagues" Working for Free: The Risks Cannot Be Ignored, and AI Employment Ethics Deserve Attention

2026-04-08
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system (AI digital avatars) is explicitly involved, used to replicate former employees' knowledge and personality to perform tasks. The event is in an internal testing phase with claimed consent, and no direct harm or legal violation has occurred yet. However, the article discusses plausible future risks to labor rights, personal data, and personality rights if the AI use is not properly regulated or consent is not obtained. Therefore, this situation fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident involving violations of rights or ethical harms. It is not an AI Incident because no harm has materialized yet, nor is it merely Complementary Information or Unrelated, as the focus is on the potential risks and ethical concerns of this AI use case.

Departed Employees Become AI Digital Humans: What Do the Experts Say?

2026-04-09
大河网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI digital humans trained on employee data) and discusses their use and potential malfunction (errors by AI digital humans). However, it does not describe any actual harm or incident that has occurred; rather, it focuses on the potential risks, legal and ethical concerns, and societal implications of deploying such AI digital humans. This fits the definition of an AI Hazard, as the development and use of AI digital humans could plausibly lead to harms such as violations of rights, responsibility disputes, or workplace disruptions, but no direct or indirect harm has yet materialized according to the article.

Still "Cyber-Working" After You Quit?

2026-04-09
大洋网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replicate former employees' work, which could plausibly lead to violations of personal digital rights and intellectual property rights if misused or unregulated. Since no actual harm is reported but the concerns and potential risks are clearly articulated, this fits the definition of an AI Hazard rather than an AI Incident. The focus is on the plausible future harm and the need for clearer legal boundaries and protections, not on a realized harm event.

Sina AI Hot Topics Hourly Report | April 8, 2026, 22:00 — Real-Time AI News Roundup

2026-04-08
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content primarily consists of general AI news, event reports, corporate strategies, and technology announcements without detailing any realized harm or credible imminent risk caused by AI systems. There is no mention of injury, rights violations, infrastructure disruption, or environmental harm linked to AI use or malfunction. Nor does it highlight a credible potential for such harm. Therefore, the article fits the category of Complementary Information, providing context and updates on AI developments rather than reporting an AI Incident or AI Hazard.

Shandong Company's Use of a Departed Employee's AI Avatar for Work Sparks Controversy

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems trained on sensitive personal data of former employees to create digital avatars that perform work tasks, which is a clear AI system involvement. The use of these AI avatars has directly led to harms including violations of personal data protection laws, ethical exploitation of labor, psychological distress among current employees, and labor market disruption. These constitute violations of rights and harm to communities, fitting the AI Incident definition. The article also discusses legal risks and ethical conflicts, confirming the realized harm rather than just potential risk. Hence, the classification is AI Incident.

Where Are the Boundaries of Replicating Employees with AI?

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the use of AI to replicate employees after departure, which involves AI systems and raises plausible risks of harm such as violations of personality rights and labor rights. However, no direct or indirect harm has been reported as having occurred yet. The discussion is about potential ethical and legal issues and the need for regulation, making it a plausible future risk rather than a realized incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

AI "Distilling" Departed Employees? Treat People as People | Frontline Commentary

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not report a concrete AI Incident or AI Hazard but rather provides a critical commentary on the potential harms and ethical challenges posed by AI practices such as 'skill distillation' of employees. There is no direct or indirect harm described as having occurred, nor a specific plausible future harm event detailed. The focus is on raising awareness and urging ethical consideration, which aligns with Complementary Information as it contributes to understanding the broader AI ecosystem and societal implications without reporting a new incident or hazard.

Sina AI Hot Topics Hourly Report | April 11, 2026, 20:00 — Real-Time AI News Roundup

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves a physical attack on an individual closely associated with AI development, motivated by fears and anxieties about AI. Although the AI system did not malfunction or cause harm directly, the attack is a direct consequence of the societal impact of AI and the public's reaction to it. The harm (injury or threat to health) to the CEO is linked indirectly to AI through the social context. This fits the definition of an AI Incident as the AI system's development and societal impact have indirectly led to harm to a person.

When a "Former Colleague" Becomes AI: Technological Progress Must Not Cross Ethical and Legal Boundaries

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the societal and governance implications of AI digital replicas, emphasizing the need for ethical and legal safeguards. It does not describe a realized harm or incident but rather discusses the plausible risks and regulatory responses to emerging AI applications. Therefore, it fits the definition of Complementary Information, as it provides context, policy updates, and ethical considerations related to AI systems without reporting a specific AI Incident or AI Hazard.

Training a Departed Employee into an AI Digital Human: "Terrifying the More You Think About It"?

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replicate former employees digitally, which is an AI system use case. The article raises concerns about privacy violations and ethical issues, indicating potential for harm to personal rights and privacy. However, it does not describe any realized harm or incident where the AI system directly or indirectly caused injury, rights violations, or other harms. The discussion centers on the plausible future risks and societal/legal responses, making it an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI is central to the event.

Zhou Hongyi Responds to "Refining Himself into an AI Avatar": This Is the Right Future for Digital Twins

2026-04-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (digital avatars based on AI models) used to replicate cybersecurity expertise and improve security operations. However, it does not report any harm, violation of rights, or disruption caused by these AI systems. The AI is used to enhance capabilities and address expert scarcity, with safeguards such as reliance on authorized data and no personal data commercialization. There is no mention of plausible future harm or incidents related to these AI systems. The main focus is on explaining the technology, its intended use, and strategic vision, which aligns with providing complementary information about AI developments and governance responses rather than reporting an incident or hazard.

Following the Popularity of "同事.skill", Zhou Hongyi Responds to "Refining Himself into an AI Avatar": This Is the Right Future for Digital Twins

2026-04-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the use and development of AI systems as digital experts to augment cybersecurity capabilities, with no mention of injury, rights violations, or other harms. The AI systems are described as operating within legal and ethical boundaries, using authorized data, and aiming to improve security. The discussion is about the future potential and strategic deployment of AI digital avatars, not about any incident or hazard causing or plausibly leading to harm. Hence, this is best classified as Complementary Information, providing insight into AI ecosystem developments and governance perspectives rather than reporting an AI Incident or Hazard.

Xinhua Insight | Skills Can Be "Refined," but Privacy Rights Cannot Be Surrendered

2026-04-28
新华网山东频道
Why's our monitor labelling this an incident or hazard?
The article centers on the potential privacy risks and legal challenges posed by AI systems that generate digital employee models from personal work data. While it identifies plausible risks of privacy rights violations and misuse of personal data, it does not describe a concrete event where harm has occurred. The discussion is about the possible future harms and the need for regulatory measures, fitting the definition of an AI Hazard or Complementary Information. However, since the main focus is on raising awareness, legal and regulatory responses, and societal implications rather than reporting a specific incident or imminent hazard, it aligns best with Complementary Information.

Weibo Releases 2025 ESG Report: Fortifying Security and Privacy Defenses, with AI Helping Improve User Experience

2026-04-28
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used by Weibo for search reasoning, content moderation, and anti-fraud activities, indicating AI system involvement. However, it does not report any realized harm or incident resulting from AI malfunction or misuse. Instead, it highlights measures taken to prevent harm, such as privacy protections, multi-layered security, and AI governance frameworks. The focus is on ongoing management, improvements, and social responsibility, fitting the definition of Complementary Information rather than an Incident or Hazard.

Personal Privacy Must Not Be Violated by Algorithms

2026-04-26
wlaq.gmw.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on the general problem of algorithmic privacy invasion and the regulatory challenges posed by AI systems but does not report any specific AI Incident or AI Hazard. It does not describe any realized harm or a particular event where AI caused or could plausibly cause harm. Instead, it discusses the broader societal context and the need for governance, which fits the definition of Complementary Information as it provides supporting context and highlights governance responses without detailing a new incident or hazard.

US Companies Fined Record Amounts for Privacy Violations in 2025

2026-04-28
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI system causing direct or indirect harm, nor does it describe a particular event where AI use or malfunction led to realized harm. Instead, it provides an overview of regulatory responses and enforcement trends related to privacy violations in the AI era, including the formation of multi-state regulatory alliances and stricter legal frameworks. This constitutes complementary information about AI governance and societal responses rather than an AI incident or hazard.

An Effective Defense Scheme for Protecting the Privacy of AI Training Data

2026-04-30
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The content centers on the plausible risks (privacy attacks) that AI systems face during training and the technical defenses to mitigate these risks. No actual privacy breach or harm has been reported; instead, the article provides research findings on potential vulnerabilities and effective countermeasures. Therefore, it describes a credible AI Hazard scenario where AI systems could plausibly lead to privacy harms if unprotected, but these harms have not materialized in this context. It is not an AI Incident because no realized harm is described, nor is it Complementary Information since it is not updating or responding to a specific past incident. It is not unrelated because it clearly involves AI systems and their privacy risks.

Privacy Protection Is Still Achievable in the AI Era, but Proton's CEO Admits One Thing Keeps Him Up at Night

2026-04-30
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the CEO's reflections and concerns about AI privacy risks, especially the potential for AI agents to lose control and leak data, which is a plausible future harm but not a realized incident. It also covers Proton's strategies and product features aimed at addressing these risks, which qualifies as complementary information about societal and technical responses to AI privacy challenges. There is no description of an actual AI incident causing harm, nor a direct event of AI malfunction or misuse leading to harm. Hence, the article fits best as Complementary Information rather than an AI Incident or AI Hazard.