35 AI Chat Apps, Including Zhipu Qingyan and Kimi, Cited for Illegal Data Collection


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China’s National Cybersecurity Information Center reported that 35 mobile apps, including the AI chatbots Zhipu Qingyan and Kimi, illegally collected users’ personal data beyond the authorized scope or for purposes unrelated to their services. The violations, detected on the Yingyongbao app store between April 16 and May 15, prompted regulatory scrutiny.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly identifies AI applications (e.g., Kimi, Zhipu Qingyan) that have been found to collect personal information illegally or beyond authorized scope, which is a breach of personal information protection laws. The involvement of AI systems in these violations is clear, as these are AI-powered applications. The harm is realized, as users' personal data rights have been infringed. This fits the definition of an AI Incident under violations of human rights or breach of legal obligations protecting fundamental rights. The article also discusses regulatory responses and challenges but the primary event is the realized harm from illegal data collection by AI applications.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Robustness & digital security

Industries
Consumer services, Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Reputational

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots, Content generation


Articles about this incident or hazard


Kimi and Several Other AI Apps Reported for Suspected Illegal Collection of Personal Information (Xinkuai Net)

2025-05-22
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI applications (e.g., Kimi, Zhipu Qingyan) that have been found to collect personal information illegally or beyond authorized scope, which is a breach of personal information protection laws. The involvement of AI systems in these violations is clear, as these are AI-powered applications. The harm is realized, as users' personal data rights have been infringed. This fits the definition of an AI Incident under violations of human rights or breach of legal obligations protecting fundamental rights. The article also discusses regulatory responses and challenges but the primary event is the realized harm from illegal data collection by AI applications.

May 21 Investment Risk Alert: Stock with Sharp Price Swings Suspended from Trading Today for Review

2025-05-21
Eastmoney
Why's our monitor labelling this an incident or hazard?
The mention of AI applications illegally collecting and using personal information beyond user consent constitutes a violation of rights under applicable law protecting personal data, which fits the definition of an AI Incident. The stock price volatility and suspension for investigation do not directly involve AI harm, but the data privacy violations related to AI apps do. Therefore, the event qualifies as an AI Incident due to realized harm from AI misuse in personal data collection.

Meituan, Zhipu, Kimi and Others Swept Up in Privacy Storm (36Kr)

2025-05-22
36Kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI assistants and AI interactive products) involved in the illegal or unauthorized collection and use of personal information, which is a violation of privacy rights and applicable laws. The involvement of AI systems in collecting data beyond user consent or unrelated to business functions directly leads to harm in the form of privacy violations and breaches of legal obligations. The event is not merely a potential risk but a realized harm confirmed by official detection and notification, fulfilling the criteria for an AI Incident. The article also discusses regulatory responses and the broader context of AI privacy risks, but the primary focus is on the incident of privacy violations caused by AI applications.

35 Apps Including Zhipu Qingyan and Kimi Reported for Illegal Collection of Personal Information

2025-05-21
Ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI assistants like Zhipu Qingyan and Kimi) involved in the incident. The harm is a violation of personal data protection laws and users' privacy rights, which falls under violations of human rights and legal obligations. The collection of personal information beyond authorized scope or unrelated to business functions is a direct misuse of AI applications leading to harm. Hence, this is an AI Incident as the AI systems' use has directly led to legal and rights violations.

21:22 Zhipu Qingyan, Kimi and Others Reported for Illegal Collection and Use of Personal Information

2025-05-20
National Business Daily
Why's our monitor labelling this an incident or hazard?
The applications mentioned are AI-related and have been found to collect personal information illegally or beyond authorized scope, which is a breach of applicable laws protecting fundamental rights, specifically privacy rights. This directly leads to violations of human rights and legal obligations, fitting the definition of an AI Incident under violations of rights (c).

Latest: Kimi and Zhipu Qingyan Reported for Illegal Collection and Use of Personal Information (2025-05-20 23:11)

2025-05-20
National Business Daily
Why's our monitor labelling this an incident or hazard?
The applications mentioned are AI systems (generative AI assistant and AI technology platforms). The illegal collection and use of personal information constitute a violation of applicable laws protecting fundamental rights, specifically privacy rights. Since the AI systems' development or use has directly led to breaches of legal obligations and users' rights, this qualifies as an AI Incident under the framework's definition of violations of human rights or breach of legal obligations protecting fundamental rights.

May 21 Investment Risk Alert: Stock with Sharp Price Swings Suspended from Trading Today for Review

2025-05-21
China Finance Online
Why's our monitor labelling this an incident or hazard?
The article includes mention of AI applications involved in illegal data collection, which relates to AI system use and potential privacy concerns. However, it does not report any realized harm or legal actions resulting directly from these AI systems, nor does it describe any AI system malfunction or misuse causing harm. The main focus is on economic data, stock market movements, and company announcements, with AI-related information serving as background context. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Video: 35 Mobile Apps Including "Zhipu Qingyan" and "Kimi" Reported for Illegal Collection and Use of Personal Information (21 Video)

2025-05-21
21jingji.com
Why's our monitor labelling this an incident or hazard?
The involvement of AI applications in the illegal collection and misuse of personal information directly breaches applicable laws protecting fundamental rights related to privacy and data protection. Since the AI systems in these apps have directly led to violations of legal obligations and users' rights, this qualifies as an AI Incident under the framework, specifically under category (c) violations of human rights or breach of obligations under applicable law.

AI Privacy Storm Escalates: Zhipu Qingyan, Kimi, ByteDance's Maoxiang, Meituan Wow and Others Named

2025-05-22
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI assistants and AI interactive applications) whose use has directly led to violations of personal data privacy laws and unauthorized data collection, constituting a breach of legal obligations protecting fundamental rights. This meets the definition of an AI Incident under category (c) for violations of human rights or breach of applicable law. The harms are realized, as the authorities have detected and publicly reported the violations, and the article details the privacy risks and regulatory implications. Therefore, this is an AI Incident rather than a hazard or complementary information.

35 Apps Found Illegally Collecting and Using Personal Information; AI Becomes the "Hardest-Hit Area"

2025-05-23
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI-generated content tools and AI education apps among the 35 apps found to be illegally collecting and using personal information beyond authorized scope or unrelated to their business functions. This constitutes a violation of personal data protection laws and users' rights, which is a breach of applicable law intended to protect fundamental rights. The AI systems' development and use directly led to these harms. The event is not merely a potential risk but a realized incident with regulatory action underway. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.