Kimi AI Model Leaks User Resume Data, Causing Privacy Breach in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Kimi large language model, developed by Moonshot AI, mistakenly disclosed one user's private resume (including name, phone number, and work history) to another user during a routine task. The leaked data was verified as authentic, raising serious concerns about AI data isolation and privacy protection. Legal action is underway.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Kimi, a large language model) that, during normal use, disclosed another user's private and sensitive personal information without authorization. The harm is realized as personal data leakage and privacy violation, which is a breach of legal obligations protecting personal information and fundamental rights. The AI system's malfunction or engineering failure (e.g., data isolation failure, session contamination) directly caused this harm. The incident is not hypothetical or potential but has already occurred and caused harm, so it is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task:
Content generation
Interaction support/chatbots


Articles about this incident or hazard


"My resume was 'running naked' on a large language model!" Why was personal privacy leaked 'doxxing'-style?

2026-04-23
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Kimi, a large language model) that, during normal use, disclosed another user's private and sensitive personal information without authorization. The harm is realized as personal data leakage and privacy violation, which is a breach of legal obligations protecting personal information and fundamental rights. The AI system's malfunction or engineering failure (e.g., data isolation failure, session contamination) directly caused this harm. The incident is not hypothetical or potential but has already occurred and caused harm, so it is an AI Incident rather than a hazard or complementary information.

Inexplicably received a stranger's phone number, work history, and key career achievements! This hit large model "can't protect" user privacy

2026-04-23
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Kimi translation model) whose malfunction or improper data handling directly led to the unauthorized disclosure of personal information, constituting a violation of privacy rights and legal obligations. The harm is actual and significant, as sensitive personal data was exposed to unrelated users. The explanation by the company attributing the issue to 'AI hallucination' is legally insufficient, and experts suggest it is due to engineering failures such as data isolation or session management errors. Therefore, this event qualifies as an AI Incident due to realized harm to individual privacy and legal violations linked to the AI system's use.

"My resume was 'running naked' on a large language model": why was personal privacy leaked 'doxxing'-style?

2026-04-23
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Kimi large language model) whose malfunction (data isolation failure and session contamination) caused the unauthorized disclosure of sensitive personal information. This breach constitutes a violation of personal privacy rights and relevant laws, fulfilling the criteria for harm to individuals (harm category (c): violations of human rights or breach of legal obligations protecting fundamental rights). The harm is realized, not just potential, as the leaked information was confirmed authentic and has led to legal action. The AI system's role is pivotal as the leak occurred through its operation and data handling. Hence, the classification as an AI Incident is appropriate.

Kimi exposed for leaking user privacy! Mistakenly sent another person's resume to a user

2026-04-21
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Kimi) that malfunctioned by erroneously outputting another user's private resume data to a different user, directly causing harm through privacy violation. The AI system's 'hallucination' or error in data handling led to the unauthorized disclosure of sensitive personal information, which is a breach of fundamental rights and data protection laws. The harm is realized and not merely potential, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Kimi exposed for leaking personal privacy: "backstabbed" while having a resume polished?

2026-04-22
千龙网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Kimi) used for document processing and resume polishing. The system malfunctioned by sending one user's private information to another user, causing a privacy violation. This is a direct harm resulting from the AI system's malfunction and use, meeting the criteria for an AI Incident. The discussion of 'AI hallucination' and 'hash collision' supports the malfunction aspect. The privacy breach is a clear violation of user rights and confidentiality, which is a recognized harm under the framework.
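The "data isolation failure" and "hash collision" hypotheses raised in these explanations can be illustrated with a minimal sketch: if a service caches processed documents under a key derived only from the file (for example, a hash of its name or content) without scoping the key by user or session, one user's output can be served verbatim to another. Everything below is a hypothetical illustration of that failure class, not Moonshot AI's actual architecture; all names are invented.

```python
import hashlib

class SharedResultCache:
    """Illustrative (hypothetical) server-side cache for processed documents."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    @staticmethod
    def _key(filename: str) -> str:
        # BUG: the cache key is derived from the filename alone. To isolate
        # tenants, it should also incorporate a user or session identifier.
        return hashlib.sha256(filename.encode()).hexdigest()

    def get_or_process(self, user: str, filename: str, process) -> str:
        # Note: `user` is accepted but never used in the key -- this is the
        # isolation failure being illustrated.
        key = self._key(filename)
        if key not in self._store:
            self._store[key] = process()
        return self._store[key]  # may belong to a different user

cache = SharedResultCache()
a = cache.get_or_process("user_a", "resume.pptx", lambda: "A's private resume text")
b = cache.get_or_process("user_b", "resume.pptx", lambda: "B's slides")
print(b)  # user B receives user A's cached document
```

Because both uploads map to the same key, user B's request returns user A's stored text; a true hash collision (two different inputs producing the same digest, made more likely by truncated or weak hashes) would produce the same cross-user leak through the same missing ownership check.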

Chilling on reflection! A domestic large model leaks user privacy, casually sending private information to others

2026-04-21
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Kimi large language model) whose malfunction or data handling led to the unauthorized disclosure of personal private information, which is a clear violation of privacy rights and a breach of obligations under applicable law protecting fundamental rights. The harm has already occurred as the private data was exposed and verified to be accurate. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to a violation of human rights (privacy).

9点1氪 | Kimi exposed for leaking users' real resumes; Musk spends over 400 billion yuan to buy a company founded by a post-2000s entrepreneur; World Cup final tickets resold for nearly US$2.3 million

2026-04-25
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Kimi translation AI) whose malfunction or misuse led to the direct leakage of personal data, constituting harm under the category of violation of human rights and legal obligations (privacy and data protection). The AI system's role is pivotal as it generated and returned the sensitive information. The harm is realized and not hypothetical. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Translating a PPT, a user inexplicably received a stranger's complete resume! Kimi's valuation "soars," yet it "can't protect" user privacy

2026-04-25
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Kimi) whose malfunction or misuse has directly led to the unauthorized disclosure of personal information, causing harm to individuals' privacy rights. The presence of an AI system is clear, and the harm (personal data leak) has materialized. The company's explanation of 'AI hallucination' is legally insufficient, and experts confirm the issue stems from engineering failures related to data management. This fits the definition of an AI Incident due to violation of personal rights and privacy, which is a breach of applicable law protecting fundamental rights.

An about-face in 86 days! Why did Yang Zhilin, holding tens of billions in cash, "change his tune" on a Hong Kong IPO?

2026-04-29
QQ新闻中心
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Kimi translation AI) malfunctioning by exposing one user's private data to another user, which is a direct violation of privacy rights and data protection laws. The harm is realized and directly linked to the AI system's malfunction (data isolation failure). This fits the definition of an AI Incident as it involves a breach of obligations under applicable law intended to protect fundamental rights (privacy). The article also discusses broader business and market context but the core harmful event is the data leak caused by the AI system's malfunction.