AI-Generated 'Skill' Clones Lead to Job Losses and Privacy Risks in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems like '同事.skill' are being used to distill employees' work habits and personalities into digital 'skills,' enabling companies to automate roles and leading to layoffs and privacy violations. These AI tools have caused job losses, disrupted career paths, and raised legal and ethical concerns in China.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems that replicate human skills and knowledge to replace employees, leading to realized harms such as mass layoffs, reduced hiring of junior staff, and systemic impacts on career development and labor rights. It also references a malfunctioning AI system that caused a major AWS outage, illustrating direct harm from an AI failure. The AI systems are central to these harms, fulfilling the criteria for an AI Incident. The article is not merely a general discussion or a complementary update but documents ongoing, realized harms caused by the use of AI in workforce management and automation.[AI generated]
AI principles
Fairness, Privacy & data governance

Industries
Business processes and support services

Affected stakeholders
Workers

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

Business function
Human resource management

AI system task
Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard


My Colleague Has Been Distilled into a Skill

2026-04-05
爱范儿
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems that replicate human skills and knowledge to replace employees, leading to realized harms such as mass layoffs, reduced hiring of junior staff, and systemic impacts on career development and labor rights. It also references a malfunctioning AI system that caused a major AWS outage, illustrating direct harm from an AI failure. The AI systems are central to these harms, fulfilling the criteria for an AI Incident. The article is not merely a general discussion or a complementary update but documents ongoing, realized harms caused by the use of AI in workforce management and automation.

合规科技 | 同事.skill Goes Viral: Is AI Eating Up Workplace Experience?

2026-04-03
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that generate "skills" by processing workplace data, a clear instance of AI system use. The concerns raised relate to potential labor rights violations and job displacement, which are recognized harms under the framework. However, the article does not report a concrete AI Incident in which harm has materialized, nor does it describe a specific AI Hazard event with plausible imminent harm. Instead, it discusses legal rulings, societal reactions, and the evolving discourse on AI's impact on labor, which fits the definition of Complementary Information, since it provides updates and context on AI's societal and governance implications without describing a new primary harm event.

"同事.skill"带来的"职场鬼故事",折射出哪些法律问题 - 21经济网

2026-04-05
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that processes and distills personal work data into callable AI skills, which involves AI system development and use. It does not report a realized harm incident but discusses credible risks of harm to labor rights, intellectual property, and personal dignity due to the AI system's capabilities and applications. The concerns about ownership, privacy, and control over the distilled knowledge indicate potential violations of rights and labor protections if unregulated. Hence, the event fits the definition of an AI Hazard, as it plausibly could lead to AI incidents involving violations of human rights and labor rights in the future. It is not Complementary Information because the article's main focus is on the emerging legal and ethical issues posed by the AI system, not on responses or updates to past incidents. It is not Unrelated because the AI system and its implications are central to the discussion.

21视频 | The Workplace Stories Behind "同事.skill" Reflect These Legal Issues

2026-04-05
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article centers on the conceptual and legal implications of AI systems that can replicate human work skills, focusing on potential future issues regarding labor rights and intellectual property. It does not describe any realized harm or direct misuse of the AI skill, nor does it report a specific event where the AI system caused injury, rights violations, or other harms. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides complementary information about societal and legal considerations related to AI's impact on labor and knowledge ownership.

Big Tech Only Needs Tokens, Not Living People

2026-04-05
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems that extract and replicate employees' skills to replace human workers, leading to actual job losses and emotional harm to individuals. It describes the use of AI in workplace automation causing layoffs and the resulting social and psychological harms. These constitute direct harms to people (employment and livelihood), fitting the definition of an AI Incident. The discussion of universal basic income experiments is complementary information providing context on societal responses to these harms. Therefore, the primary event described is an AI Incident due to realized harm from AI-driven job displacement and associated human impacts.

"同事.skill"带来的"职场鬼故事" 折射出哪些法律问题

2026-04-05
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the "同事.skill" AI skill) that uses AI to distill and automate personal work experience and behavioral patterns. While no direct harm has yet occurred, the discussion centers on the plausible risks and legal challenges arising from the use of such AI systems, including potential violations of labor rights, intellectual property rights, and personal privacy. The article does not report an actual incident but highlights credible concerns about future harms that could arise from the development and use of this AI technology. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its implications are the main focus.

The Endgame of 同事.skill Is Outsourcing Your Life to AI

2026-04-05
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems that ingest employee data to create AI 'skills' that perform their work, leading to layoffs and reduced hiring, which is a violation of labor rights and causes harm to individuals and communities. It also mentions an AI programming assistant malfunction at Amazon causing a 13-hour AWS outage, a direct operational harm. These harms are directly linked to the development, use, and malfunction of AI systems. The article also discusses broader societal impacts and systemic risks, confirming the presence of realized harm rather than just potential harm. Hence, the event fits the definition of an AI Incident.

Distill the Colleagues Who Were "Optimized" Away, and Let Them Keep You Company in Cyber Immortality

2026-04-05
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems that ingest employees' communications and documents to generate AI skills that can replace human workers, leading to layoffs and reduced hiring of junior staff. This use of AI directly causes harm to labor rights by displacing workers and closing career development paths, fulfilling the criteria for an AI Incident under violations of labor rights and harm to communities. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in this harm. The article also highlights systemic issues such as loss of human judgment and expertise cultivation, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

Big Tech Only Needs Tokens, Not Living People

2026-04-05
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of AI tools that replicate human skills and automate work tasks, which have directly led to job losses and worker anxiety. The layoffs and job displacement described are direct harms to individuals' livelihoods and well-being, fitting the definition of harm to persons or groups (a). The use of AI to replace human workers and the resulting social and psychological impacts constitute an AI Incident because the AI system's use has directly led to realized harm. The discussion of universal basic income experiments is complementary information providing context on societal responses to these harms. Therefore, the main event described is an AI Incident due to realized harm from AI-driven job displacement and associated social consequences.

The Endgame of 同事.skill Is Outsourcing Your Life to AI

2026-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate "skills" from employees' data to replace their work, directly leading to job losses and disruption of career pathways, which constitutes harm to people and communities. The article describes actual realized harms such as layoffs and system failures caused by AI tools, and the broader societal consequences of AI replacing human roles. The AI system's development and use are central to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The detailed discussion of realized impacts and examples confirms this classification.

Exes, Bosses, Colleagues... All Tokenized!

2026-04-05
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that create digital personas from personal data, which clearly qualifies as AI system involvement. However, the article does not describe any realized harm such as injury, rights violations, or disruption caused by these AI Skills. It also does not report any credible or imminent risk of harm, but rather presents the technology as a novel tool with potential emotional and social effects. Therefore, the event does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about emerging AI applications and their societal implications, fitting the definition of Complementary Information.

When Colleagues, Bosses and Exes Are All Made into Skills: Humans Are Being Repriced

2026-04-05
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems that process personal data to create digital replicas of individuals' skills and personalities, which are then used in various contexts. The development and deployment of these systems directly affect individuals' privacy and rights, constituting a violation of human rights or legal protections. The ethical concerns and risks of misuse, data ownership, and unauthorized replication of personal traits indicate realized or imminent harm. Hence, this qualifies as an AI Incident due to the direct involvement of AI systems causing or enabling harm related to rights violations and privacy breaches.

"同事.Skill" Shoots Up the Trending Charts: Departed Colleagues Have Already Been Distilled!

2026-04-05
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating digital clones of former employees by processing their communication records and behavioral data. The AI system's use directly leads to potential and ongoing harms, including violations of privacy, labor rights, and personal autonomy, as the digital clones continue to perform work tasks without the original person's consent. These harms fall under violations of human rights and labor rights, fulfilling the criteria for an AI Incident. The article also discusses the societal and ethical implications, but the primary focus is on the realized harm caused by the AI system's deployment and use, not just potential future harm or complementary information.

2026-04-08
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as distilling employee data into a digital skill that can perform work tasks, which is currently in use in a company setting. The use of this AI system has directly led to realized impacts on employees' roles and raises legal and ethical concerns about personal data rights and employment effects. These constitute violations of personal information rights and harm to labor rights, fitting the definition of an AI Incident. The article also discusses broader societal impacts and responses, but the primary focus is on the realized use and consequences of the AI system, not just potential future harm or complementary information.

AI Doesn't Just Take Your Job, It Can Become You? "同事.skill" Sparks Digital Avatars and Workplace Anxiety

2026-04-09
工商時報
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that models and replicates human work behavior by analyzing personal and organizational data, which fits the definition of an AI system. It discusses the use of this system in ways that could lead to violations of privacy, intellectual property, and labor rights, as well as harm to the value of individual labor. Although no actual incident of harm is reported, the potential for such harms is credible and clearly articulated, meeting the criteria for an AI Hazard. The article does not report a realized harm (incident) nor is it merely complementary information or unrelated news; it focuses on the plausible risks and societal implications of this AI technology.

下午察: 同事.skill Goes Viral; Who Is Next to Be Distilled?

2026-04-07
早报
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the project uses AI to generate digital colleagues based on real employee data. The use of this AI system directly leads to concerns about harm to individuals and communities, such as privacy violations, potential job displacement, and ethical issues related to data usage and consent. These concerns fall under violations of rights and harm to communities, which are recognized harms in the AI Incident definition. Since the AI system is already deployed and used, and the harms are occurring or imminent, this qualifies as an AI Incident rather than a hazard or complementary information.

王俊: What Legal Issues Lie Behind the "Workplace Ghost Stories" Brought by "同事.skill"?

2026-04-07
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system that extracts and encapsulates personal work experience into AI skills, which fits the definition of an AI system. The discussion centers on the use and development of this AI system and its potential legal and labor-related consequences. However, there is no report of direct or indirect harm occurring yet, nor a specific event where harm was caused or narrowly avoided. Instead, the article focuses on the legal questions, labor rights, intellectual property, privacy concerns, and broader employment impacts raised by this technology. This aligns with the definition of Complementary Information, as it provides important context, analysis, and governance considerations related to AI without describing a concrete AI Incident or AI Hazard.

继"同事.skill"走红,周鸿回应"把自己炼成AI分身":这才是数字分身的正确未来-新闻频道-和讯网

2026-04-10
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (digital avatars based on AI models) used to replicate and automate the work of cybersecurity experts. However, the article does not report any realized harm, injury, rights violation, or disruption caused by these AI systems. Instead, it presents a positive development and strategic response to expert scarcity in cybersecurity. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and insight into AI development and governance responses in the cybersecurity domain, enhancing understanding of AI's evolving role and potential.

Zhang Xuefeng Turned into an "AI Skill Package", Sparking Controversy

2026-04-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system that uses a skill package to emulate Zhang Xuefeng's cognitive style for educational advice, fulfilling the AI System criterion. However, there is no evidence of realized harm such as injury, rights violations, or disruption, nor is there a credible risk of such harm described. The controversy is ethical and reputational, and the company is investigating, which aligns with societal and governance responses to AI developments. Hence, the event does not meet the threshold for AI Incident or AI Hazard but fits the definition of Complementary Information.

Should We Worry About AI "Distilling" Workers?

2026-04-08
中国经济网
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the company uses AI models trained on employee data to create digital 'colleagues' that mimic human work behavior. The event stems from the use of AI systems in a novel and sensitive way. Although the AI system has not yet caused direct harm, the article highlights plausible future harms including privacy violations, labor rights infringements, and ethical dilemmas. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if misused or unregulated. The article also discusses legal and ethical debates and societal implications, but these are part of the hazard context rather than a resolved incident.

关注 | Zhang Xuefeng "Revived" by AI: Is It Legal?

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the AI skill package uses a cognitive model to generate responses mimicking Zhang Xuefeng. The event centers on the use of this AI system and the legal and ethical implications of using a deceased person's identity without authorization. Although the article discusses possible violations of personality rights and privacy, it does not report any realized harm or legal rulings yet. The concerns are about potential infringements and ethical issues that could plausibly lead to harm or legal violations. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of rights and ethical harms, but no confirmed incident has occurred yet.

Zhang Xuefeng Turned into an "AI Skill Package"?

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the "张雪峰.skill" AI skill package) that generates advice based on a modeled cognitive framework. However, there is no evidence or report of any harm caused by the AI system's use. The controversy is ethical and reputational rather than a direct or indirect harm as defined by the AI Incident criteria. The company's investigation and public reactions are governance and societal responses to the AI deployment, fitting the definition of Complementary Information. There is no plausible future harm clearly indicated that would qualify this as an AI Hazard, nor is there any realized harm to classify it as an AI Incident.

Zhang Xuefeng Turned into an "AI Skill Package"?

2026-04-10
新浪财经
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (the "张雪峰.skill" AI skill package) that generates advice by simulating a person's cognitive style. However, there is no evidence or report of any harm caused by this AI system, such as injury, rights violations, or other significant harms. The concerns raised are ethical and reputational, not concrete harms. The company's investigation and public discussion are responses to the AI system's deployment. Thus, the event fits the definition of Complementary Information, providing supporting context and societal response rather than describing an AI Incident or Hazard.

Zhang Xuefeng Turned into an "AI Skill Package" Sparks Controversy; a Reporter's Hands-On Test Finds It Hard to Install but Highly Lifelike; the Company He Holds Shares in Responds

2026-04-09
m.163.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (an AI skill package integrated with an AI assistant) that generates outputs influencing users' decisions. While there is public controversy about the ethical implications of simulating a deceased person's persona, no actual harm (such as misinformation causing harm, rights violations, or other damages) is reported. The company's response indicates ongoing investigation but no confirmed incident. Hence, the event does not meet the criteria for an AI Incident or AI Hazard but rather is a case of complementary information about AI use and societal reaction.

When You Turn a Colleague into a "Digital Avatar": Who Is Crossing the Line, and Who Is Absent?

2026-04-11
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate digital replicas of individuals based on their data, which is explicitly described. The use of these AI 'skills' has led to concerns about infringement of personality rights, unauthorized use of a deceased person's identity, and potential reputational harm. These are violations of human rights and personality rights, fitting the harm category (c) under AI Incident. The article details actual use and public controversy, indicating realized harm rather than just potential risk. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Distillation: The Workplace Horror Story of Everyone Becoming a Skill

2026-04-10
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems that extract and simulate human work skills and personas, which fits the definition of an AI system. The use of these AI-generated digital avatars to replace or supplement human workers is ongoing or imminent, implying a plausible risk of harm to labor rights, personal privacy, and autonomy. However, the article does not report a concrete incident of harm having already occurred but rather discusses the emerging technology, its deployment, societal reactions, and potential consequences. Therefore, it fits best as an AI Hazard, as it plausibly could lead to AI incidents involving violations of labor rights and personal data misuse, but no direct harm is yet documented.

壹快评 | AI "Distillation" of Colleagues Is Here? Don't Let Baseless Tech Anxiety Throw You Off Balance

2026-04-11
新浪财经
Why's our monitor labelling this an incident or hazard?
The article involves an AI-related technology (AI 'distillation' of employees' digital data) but does not describe any actual harm or incident caused by the AI system. Instead, it focuses on public anxiety, misconceptions, and potential legal risks, which are concerns and discussions about possible issues rather than realized harms. There is no direct or indirect harm reported, nor a credible imminent risk of harm from the AI system's use as described. Therefore, this article is best classified as Complementary Information, providing context, clarifications, and societal responses to an AI-related topic rather than reporting an AI Incident or AI Hazard.

Zhang Xuefeng Is "Revived" Through AI "Data Dissection" and Achieves Digital Immortality! The Truth Behind It Is Thought-Provoking

2026-04-10
m.163.com
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described, and its use directly leads to harm in the form of privacy violations, unauthorized use of personal data, and disrespect toward the deceased and their family, which are breaches of legal and ethical rights. The AI's role is pivotal, as it enables the digital resurrection of and interaction with the deceased's persona without consent. The article also discusses potential future harms from similar AI misuse, but the current unauthorized use and infringement already constitute realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

A 24-Year-Old Engineer Writes "同事.skill" in Four Hours and Sets the Open-Source Community Ablaze; Developers Race to Distill Their "Exes", "Mentors" and Even "Themselves"

2026-04-10
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system that processes and distills personal and workplace data to generate AI skills. Although no direct harm is reported, the discussion of privacy risks, potential unauthorized use of personal data, and legal uncertainties indicates a credible risk of harm to individuals' rights and privacy. The AI system's use could plausibly lead to violations of personal rights and privacy, fitting the definition of an AI Hazard. The article also includes expert commentary on the need for legal and governance responses, reinforcing the potential for future harm rather than describing a realized incident. Thus, the classification as AI Hazard is appropriate.