Major Retailers in China Secretly Collect Massive Facial Recognition Data Without Consent


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple well-known retailers in China, including Kohler, BMW, and MaxMara, were exposed for secretly installing AI-powered facial recognition cameras in stores, collecting and analyzing customers' biometric data without consent. This widespread, unauthorized data collection violates privacy laws and poses significant risks to personal security and rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the deployment of AI-powered facial recognition cameras by companies like 万店掌 and others in numerous stores, which collect customers' facial data without their knowledge or consent. This constitutes a violation of privacy rights, a breach of fundamental rights protected by law. The collection and potential misuse of biometric data can cause harm to individuals' privacy and security. Although the company claims data is not leaked or retained, the unauthorized collection itself is a harm. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy through the use of AI systems.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Recognition/object detection


Articles about this incident or hazard


万店掌 founder responds to facial data collection: "We were only helping clients with analytics"

2021-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the deployment of AI-powered facial recognition cameras by companies like 万店掌 and others in numerous stores, which collect customers' facial data without their knowledge or consent. This constitutes a violation of privacy rights, a breach of fundamental rights protected by law. The collection and potential misuse of biometric data can cause harm to individuals' privacy and security. Although the company claims data is not leaked or retained, the unauthorized collection itself is a harm. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy through the use of AI systems.

It's time for "face-stealing" companies to lose some face

2021-03-17
新华网
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose misuse has directly led to violations of individuals' rights and harm to their privacy, which falls under harm category (c) - violations of human rights or breach of legal protections. The article describes realized harm through illegal data collection and exploitation, making this an AI Incident rather than a potential hazard or complementary information. The focus is on the harm caused by the AI system's use, not just potential risks or responses.

悠络客 accused of illegally collecting facial data; Starbucks and 百果园 (Pagoda) respond that they did not "steal faces"

2021-03-17
新华网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used in surveillance equipment that collected biometric data without consent, constituting a violation of consumer privacy rights under applicable laws. The unauthorized collection of facial data is a direct breach of legal protections and consumer rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The companies' responses and ongoing investigations further confirm the materialization of harm rather than a mere potential risk.

2021-03-18
光明网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) in retail environments to collect and analyze biometric data without proper consent, which directly violates personal data protection laws and infringes on individuals' privacy rights. This constitutes a violation of human rights and legal obligations protecting personal information, fitting the definition of an AI Incident. The article describes realized harm through unauthorized data collection and privacy breaches, not merely potential risks or general commentary, thus it is classified as an AI Incident rather than a hazard or complementary information.

Guangming commentary: Our faces are being stolen, and there is nothing we can do?

2021-03-17
光明网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly—facial recognition technologies—that are used to collect and analyze personal biometric data. The article reports actual harms occurring, such as unauthorized data collection, misuse, and illegal sale of facial data, which violate privacy and personal information rights, a breach of fundamental rights under applicable law. The harms are realized, not just potential, and the article calls for regulatory and legal responses to address these violations. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to individuals' privacy and security.

Shenzhen to legislate on cameras in public places

2021-03-18
中关村在线
Why's our monitor labelling this an incident or hazard?
The article mentions the use of facial recognition cameras (an AI system) that have been used to illegally obtain personal data, which constitutes a violation of privacy rights (a breach of obligations under applicable law protecting fundamental rights). This is a realized harm caused by the use of AI systems. However, the main focus of the article is on the legislative response to this issue rather than the incident itself. Since the article primarily reports on the new legislation as a response to previously reported harms, it is best classified as Complementary Information, providing context and governance response to an AI Incident that has occurred.

After the "3·15" Gala exposé, implicated companies in Hefei begin rectification

2021-03-17
人民网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used by a company to collect personal biometric information without consumer consent, which is a violation of personal data protection laws and individual rights. The harm—unauthorized data collection and privacy infringement—has already occurred, as evidenced by the exposure and subsequent corrective actions. This meets the criteria for an AI Incident because the AI system's use directly led to a breach of legal and human rights protections. The company's response and remediation efforts do not negate the fact that harm occurred.

This year's 3·15 exposés are here! These brands were named

2021-03-16
baotou.focus.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of AI systems (facial recognition cameras) by multiple brands to collect biometric data without informing or obtaining consent from consumers. This unauthorized collection of sensitive personal data constitutes a violation of privacy rights and personal information security, which is a breach of applicable laws protecting fundamental rights. The AI system's use directly leads to harm by threatening users' privacy and security. Hence, this qualifies as an AI Incident. Other harms described do not clearly involve AI systems or their misuse leading to harm, so they are not classified as AI Incidents or Hazards here.

CCTV 3·15 Gala: Suzhou-based 万店掌 accused of illegally collecting facial data

2021-03-17
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition technology) used for biometric data collection. The use is unauthorized and non-consensual, leading to violations of privacy rights and potentially other legal obligations. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, fitting the definition of an AI Incident. The harm is realized as the data has already been collected and processed without consent, impacting individuals' rights and privacy.

Facial recognition cameras installed in many well-known stores; Kohler responds to being named by CCTV's 3·15 Gala

2021-03-18
chinaz.com
Why's our monitor labelling this an incident or hazard?
Facial recognition cameras are AI systems that process biometric data, which is sensitive personal information. The unauthorized collection of such data without explicit consent constitutes a violation of personal data protection laws and individual rights. The event describes the use of these AI systems in a way that breaches legal requirements, thus causing a violation of rights. Although the company claims limited use and no data retention, the initial unauthorized collection has already occurred, constituting an AI Incident under the framework's definition of violations of human rights or breach of applicable law protecting fundamental rights.

MaxMara responds on installing 万店掌 facial recognition cameras: they only provide in-store visitor counts

2021-03-18
chinaz.com
Why's our monitor labelling this an incident or hazard?
Facial recognition cameras are AI systems that process biometric data. The controversy involves alleged unauthorized collection of facial data, which implicates potential violations of privacy rights. However, the article reports on the controversy and company responses without confirming that harm or violations have actually occurred. There is no clear evidence that the AI system's use has directly or indirectly led to realized harm yet, only concerns and denials. Therefore, this event represents a plausible risk of harm related to AI use (privacy violations) but no confirmed incident. It fits the definition of an AI Hazard, as the use or misuse of AI facial recognition could plausibly lead to violations of rights or harm to individuals if unauthorized data collection occurs.

万店掌 founder responds to facial data collection: data is neither leaked nor retained

2021-03-18
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition cameras) used to collect and analyze biometric data of customers without their informed consent, which is a violation of privacy rights and data protection laws. The harm is realized as the data collection has already occurred and affects individuals' rights. The company's response and ongoing investigation are complementary but do not negate the incident. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through privacy violations.

The 3·15 Gala hammers "facial recognition": how can we find a "dignified" way to resist it?

2021-03-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—facial recognition technologies used by various businesses to collect and analyze consumer facial data without consent. This use has directly led to privacy violations, a breach of fundamental rights, which constitutes harm under the framework. The article also describes the societal response and technological countermeasures but focuses primarily on the realized harm from unauthorized facial data collection and surveillance. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Multiple merchants exposed for "stealing faces": where does facial recognition surveillance go wrong?

2021-03-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) that collect and process biometric data without consumer consent, which constitutes a violation of personal privacy rights and applicable laws. This misuse of AI has directly led to harm in the form of privacy violations and potential legal breaches, fulfilling the criteria for an AI Incident. The article details the scope of data collection, the lack of informed consent, and the legal consequences, confirming realized harm rather than just potential risk or complementary information.

Misused "facial recognition": four companies exposed, implicating a complete industry chain

2021-03-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition systems by several companies to collect and use biometric data without user consent, violating legal regulations on personal information protection. The harms include violations of privacy rights, unauthorized data collection, and potential identity fraud risks, which fall under violations of human rights and legal obligations. The AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The detailed description of the industry chain and the actual ongoing misuse confirms realized harm rather than potential harm.

Inside the facial recognition industry chain: Zhu Xiaohu (朱啸虎) invests as 旷视 (Megvii) and the other "four little dragons" pursue IPOs

2021-03-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (facial recognition technology) used by merchants to collect and process biometric data without consumer consent, which constitutes a violation of privacy rights and personal data protection laws. The misuse of AI in this context has directly led to harm in terms of violations of fundamental rights (privacy) and poses risks of further harm through data breaches and identity theft. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to realized harm and legal violations.

Named by "3·15", these companies responded like this: do you accept it?

2021-03-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems by companies to secretly collect user biometric data and apply discriminatory pricing strategies, which is a violation of privacy and consumer rights. The large-scale collection and misuse of personal data, including resumes sold on black markets, further demonstrate breaches of labor and privacy rights facilitated by AI or data-driven platforms. The harms are direct and realized, including privacy violations and consumer deception. The involvement of AI systems in these harms meets the criteria for AI Incidents as defined, since the AI systems' use and misuse have directly led to violations of rights and harm to consumers. Other issues like deceptive advertising and product defects, while serious, are not clearly linked to AI systems in this article and thus do not affect the classification.

喜茶 (Heytea) exposed for using 万店掌 facial recognition surveillance; official response: no illegal collection of facial data

2021-03-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI facial recognition technology to collect and process personal biometric data without consumer consent, which constitutes a violation of privacy rights and potentially applicable laws protecting fundamental rights. The AI system's use has directly led to harm in terms of privacy infringement and unauthorized surveillance, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The denial by the brand does not negate the reported harm caused by the AI system's deployment by its supplier and other companies.

Why shouldn't facial recognition cameras be installed, and what can you do once your face has been "stolen"?

2021-03-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition cameras) that automatically collect and analyze biometric data without informed consent, which is a direct breach of privacy rights and legal protections. The harm is realized as personal data is collected unlawfully on a large scale, posing risks to individuals' privacy and potentially leading to further misuse or data breaches. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and legal obligations related to personal data protection.

正通汽车 (ZhengTong Auto) 4S dealerships remove cameras overnight; staff say they were only used for internal marketing analysis

2021-03-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used by ZhengTong Auto's 4S stores to capture and analyze customer facial data without consent, which is a direct violation of privacy rights and personal information protection laws. The use of AI here has directly led to harm in terms of legal and rights violations, as well as public trust damage. The removal of the cameras and company statements are responses to the incident, but the core issue remains an AI Incident because the AI system's use caused the harm. This fits the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights.

BMW responds to its 4S dealerships using cameras to collect facial data

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The use of facial recognition cameras involves AI systems processing biometric data, which implicates privacy and potentially human rights concerns. The news reports that large-scale collection of facial data has occurred, which constitutes a violation of privacy rights and thus a breach of obligations intended to protect fundamental rights. Since the collection has already happened, this is a realized harm rather than a potential one. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems in causing harm related to human rights violations.

Shenzhen luxury-watch repair centers charge exorbitant prices for demagnetization

2021-03-16
人民网
Why's our monitor labelling this an incident or hazard?
The facial recognition technology is an AI system used to collect personal biometric data without consent, directly violating privacy and data protection rights, which fits the definition of an AI Incident (violation of human rights). Similarly, the recruitment platforms' misuse of AI-enabled data systems to leak personal information also constitutes a violation of rights. The watch repair issue, while harmful to consumers, does not involve AI systems. Therefore, the main AI-related harms are the unauthorized facial recognition and data leaks, qualifying the event as an AI Incident.

2021-03-18
人民网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems, specifically facial recognition technology, which is used to collect and process biometric data without consent, leading to violations of privacy and potential harm to individuals' property and rights. The misuse and sale of personal data constitute realized harm under violations of human rights and privacy. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to harm through privacy breaches and unauthorized data exploitation.

Suddenly, the faces in ImageNet have been "blurred"

2021-03-17
36氪 (36Kr)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (face detection via Amazon Rekognition) in the development and modification of a major AI training dataset (ImageNet). However, the article focuses on the dataset's adjustment to reduce privacy risks and does not report any realized harm or incident caused by AI. Instead, it highlights a governance and ethical response to potential privacy harms. Therefore, this is best classified as Complementary Information, as it provides important context and updates on AI ecosystem practices and responses rather than describing an AI Incident or Hazard.

What are the legal consequences of unlawful facial recognition? The 3·15 Gala has fired the first shot

2021-03-16
36氪 (36Kr)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of facial recognition technology used unlawfully to collect and process personal data without consent. This misuse has directly caused harm by infringing on individuals' privacy rights and potentially endangering personal and societal security. The exposure of these practices and the resulting legal consequences confirm that harm has materialized, meeting the criteria for an AI Incident. The detailed discussion of criminal, administrative, and civil penalties further supports the classification as an incident rather than a hazard or complementary information.

Your face "stolen" while out shopping! Kohler apologizes: cameras were only used for headcounts and have been removed overnight

2021-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI systems (facial recognition technology) to collect and process biometric data of individuals without their knowledge or consent, which is a violation of personal privacy and legal rights. This use of AI has directly led to harm in terms of privacy violations and potential threats to user security. The company's acknowledgment and corrective actions do not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and privacy.

Behind the 3·15 exposé, facial data is more tempting than device IDs

2021-03-17
36氪 (36Kr)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely facial recognition technologies used for data collection and marketing. The unauthorized collection and use of facial data without consent directly violates privacy rights and applicable personal information protection laws, constituting harm to individuals' rights. The article documents actual incidents of privacy breaches and unauthorized data use, not just potential risks. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and legal obligations protecting personal data privacy.

CCTV 3·15's first four salvos hit the internet: how your face and your résumé got "stolen"

2021-03-16
36氪 (36Kr)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems such as facial recognition cameras and data analytics platforms that process personal information. The misuse and unauthorized collection of biometric data and personal resumes, as well as the deceptive targeting of elderly users through apps that collect extensive device information, directly lead to privacy violations and potential exploitation. The harms are realized and documented, including unauthorized data capture, data sales on black markets, and misleading advertising practices. These constitute violations of fundamental rights and legal obligations, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a report of actual harms caused by AI system misuse.

Behind the problem exposed at 3·15, a hundred-billion-yuan industry is on the rise

2021-03-17
36氪 (36Kr)
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI and big data systems used to collect, analyze, and trade personal data, leading to privacy violations and targeted deceptive advertising, which constitute harm to individuals and communities. The involvement of AI in profiling and recommendation systems is clear, and the harms are realized, not hypothetical. The discussion of privacy computing and policy changes is complementary information but does not negate the presence of actual harms caused by AI systems. Hence, this qualifies as an AI Incident due to realized violations of rights and harms caused by AI-enabled data misuse.

Facial recognition technology is being abused; eliminating the "face thieves" is imperative

2021-03-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system—facial recognition technology—to capture and share personal biometric data without consent, which constitutes a violation of privacy rights and potentially other legal protections. The article describes actual harm occurring through unauthorized data collection and sharing, which is a breach of fundamental rights and a clear AI Incident under the framework. The involvement of AI is explicit (facial recognition), the harm is realized (privacy violations, potential financial risks), and the incident has been publicly exposed and acknowledged. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Back then, we thought letting merchants "understand you better" was a cool thing

2021-03-17
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI-based facial recognition systems by suppliers and businesses to collect, analyze, and share personal biometric data without informed consent, leading to violations of privacy and rights. The harms are realized and documented, including unauthorized data sharing, profiling, and discriminatory treatment, which fall under violations of human rights and harm to communities. The AI system's malfunction or misuse is directly linked to these harms. Hence, the classification as an AI Incident is appropriate.

With facial recognition norms in disarray, how can the crisis of trust be resolved?

2021-03-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The article details multiple instances where this AI system has been used without proper consent, leading to violations of privacy rights and potential financial and social harms (e.g., unauthorized data collection, identity fraud). These constitute direct harms to individuals' rights and property, fulfilling the criteria for an AI Incident. The article also discusses systemic misuse and lack of regulation, which contribute to ongoing and realized harms. Hence, the classification as AI Incident is appropriate.

Your face is being stolen, and you can do nothing about it

2021-03-16
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based facial recognition systems by commercial entities to collect and analyze biometric data without user consent, leading to privacy violations and illegal data trade. The harms are direct and significant, including breaches of personal information rights and the facilitation of criminal activities such as identity fraud and scams. The AI system's development and use are central to these harms, fulfilling the criteria for an AI Incident under the OECD framework.

Exposed at 3·15, 正通 (ZhengTong) 4S dealerships remove facial recognition cameras overnight! Here is what staff said

2021-03-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used without customer consent to collect and analyze personal data, which is a violation of personal information protection laws and privacy rights. The harm is realized as the unlawful surveillance and data collection occurred, leading to legal and social consequences. The immediate removal of the cameras is a response to the incident but does not negate the fact that the AI system's use caused harm. Hence, this qualifies as an AI Incident under violations of human rights and applicable law.

Economic Observer: the abuse of facial recognition is only one corner of the breakdown in data privacy

2021-03-16
每日经济新闻
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The misuse described involves unauthorized collection and sharing of sensitive personal data, violating privacy rights and potentially other legal protections. This misuse has directly led to harm in terms of violations of fundamental rights (privacy) and breaches of applicable laws protecting personal data. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

喜茶 (Heytea) exposed for using 万店掌 facial recognition surveillance! Official response: no illegal collection of facial data

2021-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used for unauthorized surveillance and data collection, which is a violation of human rights and privacy laws. The harm is realized as consumers' facial data is collected and stored without consent, constituting a breach of obligations under applicable law intended to protect fundamental rights. The involvement of AI in the development and use of these facial recognition systems directly leads to this harm. Therefore, this qualifies as an AI Incident under the framework.

The companies stealing your "face": one just raised 80 million yuan, another is about to list on the STAR Market

2021-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-powered facial recognition systems to collect and analyze personal biometric data without proper consent, violating legal standards and consumer rights. This misuse has directly harmed individuals by infringing on their privacy and personal information protection, which qualifies as a violation of human rights and applicable laws. The involvement of AI systems in the collection, processing, and labeling of facial data is clear, and the harms are realized and ongoing. Hence, this is an AI Incident rather than a hazard or complementary information.

良品铺子 (BESTORE) responds on its use of facial recognition surveillance: used in logistics and warehousing, not in offline stores

2021-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition) used in logistics operations, but there is no reported or implied harm to individuals or groups, no violation of rights, and no disruption caused. The company's response and the supplier's self-audit efforts indicate a governance and compliance context rather than an incident or hazard. Therefore, this is best classified as Complementary Information, providing context and updates related to AI use and public concerns about privacy.

正通汽车 (ZhengTong Auto) responds to the CCTV 3·15 Gala report: all equipment use halted, self-inspection underway

2021-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The use of facial recognition technology constitutes an AI system. The unauthorized collection and analysis of personal biometric data without customer knowledge or consent is a violation of privacy rights and likely breaches applicable laws protecting personal data and fundamental rights. This constitutes a violation of human rights and legal obligations (definition c). The harm is realized as customers' personal information was collected and processed without consent, which is a direct harm. Therefore, this event qualifies as an AI Incident.

After watching the 3·15 Gala, I don't want my "face" anymore……

2021-03-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically facial recognition technologies, and details their misuse and data breaches leading to privacy violations and potential harm to individuals' rights and property (personal data). The exposure of unauthorized data collection and sale of facial data, as well as the societal consequences described, meet the criteria for an AI Incident because harm to rights and privacy has occurred. The article also discusses ongoing and pervasive use of these AI systems causing direct harm or risk thereof, not just potential future harm. Therefore, this is classified as an AI Incident.

正通汽车 (ZhengTong Auto) 4S dealerships remove cameras overnight; staff say they were only used for internal marketing analysis

2021-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used by ZhengTong Auto's 4S stores to collect and analyze customer facial data without consent, which is a violation of personal information protection laws and customers' rights. The use of AI here directly caused harm in the form of privacy violations and legal breaches. The removal of cameras and company responses are reactions to this harm. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use.

The companies stealing your "face": one raised 80 million yuan, others are about to go public

2021-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely facial recognition technologies used by companies to collect and analyze biometric data without proper consent, which constitutes a violation of personal information protection laws and consumer rights. The misuse and unauthorized collection of facial data directly harm individuals' privacy and rights, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The article describes realized harm through unlawful data collection and potential legal consequences, not just potential risks, thus qualifying as an AI Incident rather than a hazard or complementary information.
Shocking! The 3·15 Gala exposed these nine consumer scandals — here are the latest developments

2021-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems in multiple retail locations where consumers' biometric data is collected without their knowledge or consent, violating privacy rights. This is a direct breach of human rights and legal obligations protecting personal data. The AI system's use (facial recognition) is central to the harm described. Other harms mentioned are not directly linked to AI systems. Hence, the event is classified as an AI Incident due to realized harm caused by AI system misuse.
Kohler responds to illegal facial data collection in its stores: camera equipment removed overnight

2021-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The use of facial recognition cameras implies an AI system involved in biometric data processing. The unauthorized collection of facial data constitutes a violation of privacy and data protection laws, which falls under violations of human rights or legal obligations. The company's response to remove the equipment and cooperate with authorities confirms the recognition of the harm. Therefore, this event is an AI Incident due to realized harm from the AI system's use.
Why facial recognition cameras shouldn't be installed — and what to do if your "face" is stolen

2021-03-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition cameras) that automatically collect and analyze biometric data without informed consent, leading to violations of personal privacy and data protection laws. This constitutes a breach of fundamental rights under applicable law, fulfilling the criteria for an AI Incident under the OECD framework. The harm is realized as unauthorized data collection and potential misuse of sensitive biometric information, which can cause significant privacy harm to individuals and communities. The article also references legal actions and regulatory frameworks addressing these harms, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
Facial recognition hammered at the 3·15 Gala — I found a "dignified" way to defeat it

2021-03-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI facial recognition systems that collect and process biometric data without user consent, leading to privacy violations and potential breaches of rights. This meets the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights). The article also discusses various responses and countermeasures, but the primary focus is on the realized harms from facial recognition misuse. Therefore, the event is classified as an AI Incident.
Named at the 3·15 Gala, these companies responded like this — do you accept it?

2021-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the deployment of AI systems (facial recognition cameras) by companies to collect and analyze consumer facial data without consent, which directly violates consumer privacy rights and applicable laws. This constitutes an AI Incident as the AI system's use has directly led to harm (violation of rights). The misuse and sale of personal data, including resumes, and the use of AI-powered apps that collect data and push deceptive ads, also represent realized harms linked to AI systems. The article also mentions company apologies and regulatory investigations, which are complementary information but do not negate the presence of an AI Incident. Therefore, the event is best classified as an AI Incident due to the direct and ongoing harms caused by AI system misuse and violations of consumer rights.
What hidden consumer traps did the 3·15 Gala reveal? Have you been tricked?

2021-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition systems by multiple companies without consumer consent, which is an AI system. The unauthorized collection and use of biometric data constitute a violation of human rights and legal protections related to privacy. This harm is realized and directly linked to the AI system's use. Other reported harms, such as resume data leaks, deceptive apps, and product defects, do not clearly involve AI systems or their malfunction. Hence, the classification as an AI Incident is based on the facial recognition misuse causing direct harm.
Facial recognition technology is being abused — what can be done? Experts answer

2021-03-17
北青网
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identification and verification. The article reports on its unauthorized use by merchants without consent, which constitutes a violation of privacy rights and could lead to harms such as identity theft and fraud. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (privacy violations and potential financial harm). The article also discusses the need for regulation and ethical use, but the primary focus is on the realized misuse and harm, not just potential or complementary information.
The wild era of digitization and informatization comes to an end

2021-03-16
m.thepaper.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use and misuse of AI systems, such as facial recognition technology and data-driven digital marketing, leading to direct harms including privacy violations and unauthorized data sales. These constitute violations of human rights and consumer rights, fitting the definition of an AI Incident. The involvement of AI systems is clear, the harms are realized, and the article details multiple concrete examples of such harms.
Dr. Z's take: your face is your asset — don't let technology take liberties with it

2021-03-16
The Paper
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of facial recognition technology by businesses to collect and analyze personal biometric data without informed consent, leading to violations of privacy rights. The Facebook lawsuit exemplifies a concrete AI Incident where harm occurred due to unauthorized data collection and storage. The article also references ongoing legal and regulatory responses, but the primary focus is on realized harms caused by AI misuse, fitting the definition of an AI Incident.
HEYTEA responds to its use of 万店掌 cameras: it does not illegally collect facial data

2021-03-16
Techweb
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used in retail stores to collect biometric data. The unauthorized collection of facial data without informed consent constitutes a violation of personal privacy rights and applicable data protection laws, which falls under violations of human rights and legal obligations. The exposure and investigations confirm that harm to individuals' privacy and data security has occurred or is ongoing. Therefore, this qualifies as an AI Incident due to realized harm linked to the use of AI facial recognition systems.
The camera in the corner is leaking your privacy like this

2021-03-16
huxiu.com
Why's our monitor labelling this an incident or hazard?
Facial recognition cameras are AI systems that analyze biometric data to identify or track individuals. The article reveals that these systems are installed in many stores and are collecting facial data, which is then used in ways that compromise privacy. This is a direct violation of privacy rights and possibly legal protections, constituting harm to individuals. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.

2021-03-16
雪球
Why's our monitor labelling this an incident or hazard?
Although facial recognition cameras and recruitment platforms likely involve AI systems, the article does not report any incident where the AI system's development, use, or malfunction directly or indirectly caused harm as defined by the framework. The issues are primarily about privacy concerns, unauthorized data access, or product defects unrelated to AI system failures. Therefore, this is best classified as Complementary Information providing context on AI-related technologies in consumer issues but not describing a specific AI Incident or AI Hazard.
Why facial recognition cameras shouldn't be installed — and what to do if your "face" is stolen

2021-03-17
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition cameras with AI capabilities) that collect and process personal biometric data without informed consent, leading to violations of privacy rights and legal protections. The harm is realized as personal data is collected and potentially exposed, which fits the definition of an AI Incident due to violations of human rights and applicable laws protecting personal information. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in enabling this mass data collection and profiling.
Media: multiple merchants exposed for "stealing faces" — what is wrong with facial recognition surveillance?

2021-03-18
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used in retail environments to collect and analyze personal biometric data without informed consent, which is a violation of privacy rights and applicable laws. This misuse of AI has directly caused harm by infringing on individuals' fundamental rights to privacy and data protection. The scale of data collection and the lack of transparency exacerbate the harm. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to personal data privacy.
Kohler apologizes: affected stores have been instructed to remove the camera equipment overnight

2021-03-16
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used in a way that violates personal data rights and privacy, which constitutes a breach of obligations under applicable law protecting fundamental rights. The unauthorized collection of biometric data without consent is a direct violation of privacy rights, qualifying as an AI Incident under the framework. The company's response and regulatory investigations are complementary information but do not negate the incident classification.
How to protect personal information security in the digital age

2021-03-18
it.cri.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly—facial recognition technology is an AI system that processes biometric data to generate outputs influencing decisions and actions. The misuse and unauthorized collection of facial data have directly led to violations of privacy rights and harm to individuals' property security, fulfilling the criteria for an AI Incident. The article describes actual harm occurring due to the AI system's use, including data breaches and illegal resale of personal information, as well as regulatory investigations and legal actions. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
CCTV exposes merchants collecting facial information — who exactly is "stealing faces"?

2021-03-18
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment and use of AI facial recognition systems by commercial businesses and government entities to collect and analyze personal biometric data without consent, violating privacy rights. The harm is direct and ongoing, as customers' facial data is collected and analyzed without permission, infringing on fundamental rights. The involvement of AI systems in the development and use stages is clear, and the resulting privacy violations and potential human rights breaches meet the criteria for an AI Incident. Although the report also discusses broader governance and societal issues, the core event is the unauthorized use of AI facial recognition causing harm.
CCTV blasts merchants for collecting facial recognition data

2021-03-16
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The use of facial recognition AI systems by businesses to collect biometric data without consent constitutes a violation of privacy rights and a threat to property and personal security, which fits the definition of harm (c) under AI Incident. The article explicitly mentions the unauthorized collection of facial data and the criticism of this practice as a serious threat to users. The involvement of AI systems (facial recognition) is explicit. Although the dataset creation is mentioned, it serves as complementary context rather than the main harm. Therefore, the event is best classified as an AI Incident.
The 3·15 Gala exposed something disgusting, but I see a money-making opportunity!

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through facial recognition technology used in stores, real estate sales, public facilities, and other contexts. The use of these AI systems leads to unauthorized biometric data collection and profiling without consent, which is a violation of privacy rights and can facilitate further harms like financial theft or unauthorized access. Since these harms are occurring as described, this qualifies as an AI Incident under the framework, specifically under violations of human rights and privacy (c).
Spotlight on "3·15"

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems installed covertly in thousands of stores collecting biometric data without consent, violating personal information security laws. It also details recruitment platforms where AI-enabled data access leads to large-scale personal data leaks and sales on black markets. These are direct violations of privacy and personal data rights, fulfilling the criteria for an AI Incident under violations of human rights and privacy. The AI systems' development and use have directly led to these harms, not just potential future risks.
#Kohler removes cameras overnight# trends online; netizens ask: is an apology and removal really enough?

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used in a way that collected personal biometric data without proper consent, which is a violation of privacy rights. The harm (privacy violation) has already occurred as exposed by the CCTV 3.15 report, and the company's response is a remediation step. The presence of AI and the direct link to harm (privacy breach) classify this as an AI Incident rather than a hazard or complementary information.
Camera maker holding massive amounts of facial data under investigation

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI facial recognition systems by companies and manufacturers to collect and store biometric data without consumer consent, violating laws protecting personal information and privacy. This constitutes a breach of human rights and legal obligations, fulfilling the criteria for an AI Incident. The harm is realized, as unauthorized data collection and storage have occurred, and regulatory investigations are underway. The AI system's development and use directly led to violations of consumer rights and privacy, which are harms under the defined framework.
Kohler responds on facial recognition cameras: used only to count store visitors, removed overnight

2021-03-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and use of facial recognition AI systems that collected sensitive biometric data without informed consent, violating privacy rights and legal protections. This constitutes a breach of obligations under applicable law intended to protect fundamental rights. The harm (privacy violation) has already occurred, making this an AI Incident. The company's removal of the cameras is a response but does not negate the fact that harm took place.
After CCTV's exposure, these companies apologized overnight

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems, specifically facial recognition technology, which is used covertly by companies to collect and process biometric data without user consent. This unauthorized data collection directly violates personal privacy rights, a fundamental human right, and poses risks to users' security and property. The involvement of AI in these privacy violations and the resulting harm to individuals' rights and privacy clearly qualifies this as an AI Incident. The article also describes regulatory responses and company apologies, but the primary focus is on realized harm caused by AI misuse, not just potential or complementary information.
Facial recognition abuse named at the 3·15 Gala; 万店掌 and 悠络客 issue apologies and corrective measures

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition based on computer vision) used in retail settings. The companies' misuse of these AI systems to collect facial data without proper consent has directly led to violations of privacy rights, which is a breach of obligations under applicable law protecting fundamental rights. The public naming and subsequent apologies and corrective actions confirm that harm has occurred. Hence, this is an AI Incident as the AI system's use has directly caused harm to individuals' rights and privacy.
Data trafficking draws scrutiny; Zhaopin (智联招聘) and other companies exposed

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems, such as facial recognition cameras with high accuracy rates, used without consumer consent to collect and process biometric data, which is a clear AI system involvement. The misuse and sale of personal data from recruitment platforms also involve AI-driven data processing and management systems. The harms include violations of privacy rights and potential for fraud and other criminal activities, which are direct harms to individuals' rights and safety. The incidents have already occurred and caused harm, meeting the criteria for AI Incidents. The article does not merely warn of potential harm but documents realized harm and ongoing misuse, thus not qualifying as AI Hazard or Complementary Information.
3·15 Gala exposes facial recognition abuses; Kohler bathroom stores nationwide all had cameras installed

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems with facial recognition capabilities used to capture and store biometric data of customers without their knowledge or consent, violating legal requirements and personal privacy rights. This unauthorized data collection and processing is a direct breach of fundamental rights and legal obligations, fulfilling the criteria for an AI Incident due to violations of human rights and personal data protection laws. The harm is realized and ongoing, not merely potential, as customers' sensitive biometric data is being collected and stored without consent, which can lead to serious privacy and security harms.
Kohler removes camera equipment overnight: netizens outraged?

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of facial recognition technology, an AI system, to collect biometric data without consent, violating personal information security laws and infringing on privacy rights. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, qualifying as an AI Incident. The harm is realized as consumers' biometric data was collected without authorization, posing risks to their privacy and security.
3·15 Gala gossip: face-collecting firms have well-known investors, PR teams waited on standby at KTVs, and Tesla didn't even make the cut

2021-03-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of facial recognition technologies used by companies to collect biometric data without proper consent, which is a direct violation of personal privacy rights (a breach of fundamental rights). The involvement of AI in data collection and processing is clear, and the harms are realized as these practices have been exposed and criticized. Additionally, the misuse of AI or algorithmic systems in recruitment platforms leading to personal data leakage and misleading medical advertisements further supports the classification as an AI Incident. The article does not merely warn of potential harm but reports on actual privacy violations and data misuse involving AI systems, fulfilling the criteria for an AI Incident.
How these big-name Shanghai stores "steal faces"

2021-03-16
上海热线
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used in retail stores to collect and process biometric data without consent, which breaches legal requirements for personal information protection. This unauthorized data collection and tracking of individuals' movements constitutes a violation of rights under applicable law, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The harm is realized as customers' privacy rights are infringed upon through covert data collection and profiling.
HEYTEA responds to its use of 万店掌 cameras; netizens: isn't ordinary surveillance footage enough?

2021-03-16
上海热线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems to capture and process personal biometric data without user consent, leading to violations of privacy and data protection laws. This constitutes a breach of fundamental rights and legal obligations, fulfilling the criteria for an AI Incident under the framework. The harm is realized as consumers' personal data is collected and used without their knowledge or permission, directly linked to the AI system's use.
Facial recognition abuse named at the 3·15 Gala; 万店掌 and 悠络客 apologize and promise corrections

2021-03-17
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition based on computer vision) used by companies that have been found to have violated regulations by illegally collecting facial data. This constitutes a breach of privacy rights, a violation of applicable law protecting fundamental rights. The companies' apologies and corrective measures confirm the recognition of harm. Hence, the event meets the criteria for an AI Incident due to the realized harm (violation of rights) caused by the use of AI systems.
The 3·15 Gala's nine exposés implicate multiple companies

2021-03-16
上海热线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems in stores collecting biometric data without consent, which is a violation of personal information rights and poses privacy and security harms. Similarly, recruitment platforms use algorithmic systems to manage and provide access to personal resumes, which are being misused and leaked, causing privacy violations. The phone cleaning apps use profiling and automated prompts that lead to scams targeting elderly users, indicating AI-driven harm. These harms are realized and directly linked to AI system use or misuse, fitting the definition of AI Incidents. Other reported issues do not involve AI systems and thus are unrelated to AI harms.
After CCTV exposes facial recognition abuse: companies rush to clarify, dismantle, or "play dead"

2021-03-16
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems—facial recognition technology—that have been used to collect sensitive biometric data without consent, violating privacy rights and legal protections. The harm is realized as individuals' personal data is forcibly collected, posing risks to their personal security and rights. The companies' actions constitute misuse of AI systems leading to violations of fundamental rights, fulfilling the criteria for an AI Incident. The exposure and subsequent company responses do not negate the occurrence of harm but rather confirm it. Hence, the classification as AI Incident is appropriate.
After "appearing" on CCTV's 3·15 Gala, personnel from Suzhou-based 万店掌 taken in for investigation

2021-03-16
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition technology) used for collecting and analyzing personal biometric data on a large scale. The system's deployment has resulted in an official investigation and detainment of personnel, indicating that harm related to privacy violations and potential breaches of legal obligations has occurred. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of human rights or legal protections. The investigation and public exposure confirm that harm is realized, not just potential.
Cameras in multiple brands' stores can assign ID numbers to faces? Netizens: do masks even help?

2021-03-16
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: facial recognition cameras that automatically capture and assign IDs to individuals. The use of these AI systems for biometric data collection without consent breaches legal frameworks protecting personal and sensitive information, constituting a violation of human rights and personal data protection laws. The harm is realized as individuals' biometric data is collected and tracked without authorization, impacting privacy rights. The company's acknowledgment and remedial actions confirm the incident's materialization. Hence, this is an AI Incident involving direct harm through misuse of AI facial recognition technology.
Facial recognition technology is being abused — what can be done? Experts weigh in

2021-03-16
每日经济新闻
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system involved in biometric data processing. The article describes unauthorized collection and potential misuse of facial data, which could lead to violations of privacy rights and personal data protection laws, constituting harm to individuals' rights. While no concrete harm is reported as having occurred, the misuse and lack of consent plausibly could lead to AI incidents involving privacy violations. Therefore, this situation is best classified as an AI Hazard, as it outlines credible risks of harm from AI misuse without reporting a specific realized incident.
Facial recognition abuse can't be settled just by removing the cameras

2021-03-16
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data to identify individuals. The misuse and unauthorized use of such systems, as reported, directly leads to violations of privacy and potentially endangers individuals' rights and safety, constituting harm under the framework. The article describes realized harm (privacy breaches and risks to life and property) caused by the AI system's use, qualifying it as an AI Incident. The mention of regulatory responses supports the seriousness of the harm but does not change the classification.
Ubiquitous facial recognition technology: what exactly makes it "terrifying"?

2021-03-16
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of facial recognition AI systems for unauthorized biometric data collection and analysis without informed consent, violating personal information security regulations. This constitutes a breach of fundamental rights (privacy and data protection). The AI system's use directly leads to harm through privacy violations and potential misuse of sensitive data. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
3·15 exposé follow-up: exposed for supplying merchants with facial recognition surveillance cameras, Suzhou-based 万店掌 is cooperating with the investigation

2021-03-17
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event clearly describes the deployment and use of an AI system (facial recognition cameras) that collects sensitive biometric data without user consent, violating legal protections and personal rights. This unauthorized data collection and analysis constitutes a breach of fundamental rights and personal privacy, which is a recognized form of harm under the AI Incident definition (violation of human rights and breach of legal obligations). The company's cooperation with authorities and product removal are responses to the incident, but do not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
BMW dealership captures personal information via facial recognition

2021-03-16
早报
Why's our monitor labelling this an incident or hazard?
The use of facial recognition technology is an AI system application. The unauthorized collection of personal data without consent breaches privacy rights, which are protected under applicable laws and human rights frameworks. This constitutes a violation of rights due to the AI system's use, thus qualifying as an AI Incident.
Another year's 3·15: who will protect the users being preyed upon?

2021-03-19
杭州网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems like facial recognition technology and data analytics platforms that collect and misuse sensitive personal information without consent, constituting violations of privacy and personal data protection laws. The illegal sale of personal data and the deployment of AI-driven advertising algorithms that spread false medical claims further demonstrate direct harm to individuals and communities. These harms align with violations of human rights and breaches of legal obligations protecting personal data and privacy, fitting the definition of AI Incidents. The article also includes responses from companies and experts, but the primary focus is on the incidents themselves, not just complementary information.
Two Xi'an residential compounds switch on facial recognition; access cards voided, residents struggle to get home

2021-03-17
华商网
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used for access control. Its deployment and mandatory use have directly caused harm by restricting access to residents who refuse to register their facial data, effectively invalidating their previous access method (door cards). This has led to practical harm (difficulty entering homes) and raises significant privacy and data protection concerns, which are violations of human rights. The event involves the use of AI systems leading to realized harm, not just potential harm, so it is classified as an AI Incident rather than a hazard or complementary information.
万店掌 founder responds to being named at 3·15 over facial data collection: corrective measures will be made public

2021-03-15
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The collection of facial data implies the use of AI systems for facial recognition or analysis. The event centers on the company's acknowledgment and planned remediation following public scrutiny, but does not report any realized harm such as privacy violations or data breaches. Therefore, this is best classified as Complementary Information, as it provides an update on a previously identified AI-related concern and the company's response, rather than describing a new AI Incident or AI Hazard.
Xi'an compounds already using facial recognition systems; residents hope card access and face scans can be used side by side

2021-03-17
华商网
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) in a real-world setting (residential communities) where the system's deployment has led to residents' privacy and data security concerns. The forced replacement of access cards with facial recognition without clear consent and transparency constitutes a violation of personal rights and data protection obligations. These issues align with the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. Although no physical harm is reported, the privacy and consent issues are significant harms. Hence, the event is classified as an AI Incident.
Full list of companies exposed at the 3·15 Gala: 360 Search, Ford Motor…

2021-03-16
bbs.hsw.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of AI systems (facial recognition cameras) that collect biometric data without consent, violating legal norms and consumer rights. The harm is realized as consumers' personal biometric information is collected secretly, posing risks to privacy and security. The involvement of AI in the development and use of these facial recognition systems directly leads to this harm, fulfilling the criteria for an AI Incident under violations of human rights and legal protections.
Companies exposed at the "3·15 Gala" respond overnight

2021-03-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the use of facial recognition cameras and machine learning for security in recruitment platforms. The harms include violations of privacy rights and illegal data sales, which are breaches of fundamental rights and obligations under applicable law. These harms have already occurred, making this an AI Incident. The article also includes company responses, which are complementary information, but the primary classification is AI Incident due to the realized harms caused by AI system use and misuse.
2021-03-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The reported illegal collection of facial data by AI-enabled surveillance devices directly implicates an AI system's use leading to a violation of privacy rights, a breach of applicable law protecting fundamental rights. The company's response to draft announcements and remediation does not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use in unauthorized facial data collection.
万店掌 responds: emergency self-inspection under way; Bestore, HEYTEA, and M&G Stationery are all among its clients

2021-03-16
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (intelligent store monitoring cameras) but does not report any direct or indirect harm caused by their use, nor does it indicate plausible future harm. The companies are responding to concerns and clarifying facts, which constitutes a governance or societal response. Therefore, this is Complementary Information rather than an AI Incident or Hazard.
Over a hundred of ZhengTong's 4S dealerships named by 3·15: the group once sank into a funding crisis and sold shares to state capital

2021-03-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition technology) used by ZhengTong Auto Group's 4S stores to collect personal biometric data without customer consent, which is a direct violation of privacy rights and applicable laws protecting personal data. This constitutes a breach of fundamental rights under applicable law, fulfilling the criteria for an AI Incident. The harm is realized as customers' personal information was collected unlawfully, and the company has responded by stopping the use of the system and initiating a self-examination. Therefore, this is classified as an AI Incident due to the direct involvement of AI in causing a violation of rights and harm to individuals.
Lanjing 3·15 | 瑞为 responds to being named by CCTV: a special task force has been set up to conduct a thorough self-inspection of the data security of its customer-flow analysis system

2021-03-16
金融界网
Why's our monitor labelling this an incident or hazard?
The intelligent cameras use AI-based facial recognition technology to capture personal data without informed consent, which constitutes a violation of privacy rights and data protection laws. This is a direct harm to individuals' rights and privacy, fitting the definition of an AI Incident under violations of human rights or breach of legal obligations. The company's response to investigate and mitigate risks is a complementary action but does not negate the incident itself. Therefore, this event is classified as an AI Incident.
3·15 Gala reveals facial recognition cameras installed in many well-known stores, posing serious threats to users' property and privacy security

2021-03-16
finance.3news.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used without informed consent, leading to unauthorized biometric data collection. This directly violates personal privacy and legal regulations, constituting harm to human rights and privacy. The harm is realized, not just potential, as users' biometric data is being collected and stored secretly, threatening their privacy and property security. Therefore, this qualifies as an AI Incident under the definitions provided.
Kohler and Max Mara respond to their use of 万店掌 cameras; 万店掌 conducting a self-inspection

2021-03-16
金融界网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used in retail environments to collect biometric data without informed consent, which is a violation of privacy rights and personal data protection laws. The harm is realized as customers' biometric data were collected and tracked without their knowledge, posing risks to their privacy and security. The companies' responses indicate acknowledgment of the harm and efforts to mitigate it. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems causing violations of human rights (privacy) and potential breaches of applicable laws protecting personal data.
ZhengTong responds that it immediately stopped using all the devices and has launched a self-inspection to verify the situation.

2021-03-16
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used to collect biometric data without consent, which is a violation of legal and human rights protections. This unauthorized data collection directly harms individuals' privacy rights and breaches applicable laws. Therefore, this qualifies as an AI Incident due to the realized violation of rights and harm to individuals. The company's response is a follow-up action and does not negate the incident classification.
HEYTEA responds to its use of 万店掌 surveillance: cameras are used only for routine monitoring and have no facial recognition capability

2021-03-16
金融界网
Why's our monitor labelling this an incident or hazard?
The event centers on the use of monitoring technology that may or may not include AI-based facial recognition. The brand denies AI system involvement in facial recognition or personal data collection, and no harm or violation is directly linked to their use of the system. The article mainly reports on the brand's response to allegations and clarifies the nature of their monitoring equipment. There is no indication of realized harm or plausible future harm caused by AI systems in this specific case. Therefore, this is Complementary Information providing context and response to a broader AI-related concern rather than a new AI Incident or Hazard.
3·15 exposes privacy leaks: in the era of the digital economy, what technology can protect our privacy?

2021-03-18
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems like facial recognition technology that have directly led to privacy breaches, which constitute violations of fundamental rights (privacy). It references concrete incidents such as the Cambridge Analytica scandal and a data breach at Chowbus, both involving AI or data-driven systems causing harm to users. These are clear AI Incidents due to realized harm. Additionally, the article discusses privacy computing as a technological response, which is complementary information enhancing understanding of mitigation efforts. However, the main focus is on the harms already caused by AI systems and data misuse, qualifying the article primarily as an AI Incident.
Lanjing 3·15 | "Facial recognition" named by CCTV: how can the AI vision story of SenseTime and others continue?

2021-03-16
金融界网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used by companies to capture biometric data without user consent, which constitutes a violation of personal privacy and legal rights. The harm is realized as customers' biometric data is collected without knowledge, posing risks to their privacy and property security. The article also mentions legal actions and regulatory frameworks addressing these harms. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's use and misuse.
万店掌 responds to being named by CCTV: a special task force was formed overnight and an emergency self-inspection launched

2021-03-16
金融界网
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (facial recognition technology) used in a way that has directly caused harm by capturing sensitive personal biometric data without informed consent, violating privacy rights and potentially other legal protections. The scale of data collection (over 100 million facial data points) and the use of blacklists further indicate significant rights violations. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.
Kohler: camera equipment was used only to count store visitors; overnight removal has been arranged

2021-03-16
金融界网
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here for visitor counting. The issue was publicly exposed, prompting the company to act by removing or disabling the devices and ensuring no data misuse. There is no explicit report of harm such as privacy violations or legal breaches occurring, only the potential for such harm which is being addressed. Therefore, this is not an AI Incident or Hazard but a report on mitigation and response, fitting the definition of Complementary Information.
Are the cameras exposed at 3·15 used by HEYTEA and other big-name stores? Netizens erupt: why pass over a 300-yuan Hikvision camera for an off-brand unit costing over a thousand? Urgent responses follow

2021-03-17
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used by companies to collect sensitive biometric data such as facial features and emotional states. The unauthorized or non-consensual collection of such data constitutes a violation of personal privacy rights, which is a breach of applicable law protecting fundamental rights. The exposure of these practices and the public backlash indicate that harm has occurred or is ongoing. The involvement of AI in the development and use of these facial recognition systems directly leads to the harm described. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
The "face-selling" companies exposed by CCTV's 3·15 are well connected: bosses with wide-ranging ventures and many investors

2021-03-16
163.com
Why's our monitor labelling this an incident or hazard?
The companies mentioned deploy AI systems (facial recognition and analysis) that capture and analyze personal biometric data without customer consent, directly violating privacy rights and legal protections. The AI system's use has directly led to harm in the form of unauthorized data collection and potential misuse of sensitive personal information. This fits the definition of an AI Incident as it involves the use of AI systems causing violations of human rights and legal obligations protecting fundamental rights.
Your face is being stolen, and there is nothing you can do about it

2021-03-16
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly details the use of AI-based facial recognition systems by various companies to collect and analyze personal biometric data without user consent, which is a violation of privacy and personal information rights under applicable laws. The misuse and illegal sale of this data have led to realized harms such as privacy breaches, potential identity theft, and fraud. The AI system's development and use are central to the incident, and the harms are direct and significant. Therefore, this event qualifies as an AI Incident under the OECD framework.
"The one who just walked in is a broke loser!" These trendy stores mock you from behind their spy gadgets

2021-03-17
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras and associated software) used for customer tracking and profiling. The use of these AI systems has directly led to violations of privacy and consumer rights, as customers were not informed and their biometric data was collected and shared without consent. The harm is realized, as evidenced by regulatory actions and public backlash. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy obligations caused by the AI system's use.
ImageNet decides to blur human faces, and recognition accuracy for husky images soars

2021-03-17
163.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the modification of an AI dataset (ImageNet) to blur faces for privacy protection and the impact of this on AI model performance. It also covers ethical and privacy concerns raised by researchers and the AI community's response. There is no indication of an AI system causing harm or a credible risk of harm occurring or imminently occurring. The event is about governance, ethical improvements, and research findings related to AI datasets, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Respect the boundaries: facial recognition technology must not cross the privacy red line

2021-03-16
红网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) that have directly led to violations of privacy rights and unauthorized data collection, which constitutes a breach of obligations under applicable law protecting fundamental rights. The unauthorized and covert collection of biometric data without consent is a clear harm to individuals' privacy and security, fitting the definition of an AI Incident. The article details realized harm rather than potential harm, so it is not merely a hazard or complementary information.
ZhengTong Auto 4S dealerships remove cameras overnight, saying they were used only for internal marketing analysis

2021-03-17
网易车讯
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used without consent to collect biometric data, which is a violation of personal information protection laws and fundamental rights. The harm is realized as the illegal collection of sensitive personal data, constituting a breach of legal obligations and human rights. The company's subsequent removal of the cameras and public apology are responses to the incident but do not negate the fact that harm occurred. Hence, this is an AI Incident rather than a hazard or complementary information.
Four AI companies named by CCTV's 3·15! A large-scale "face theft" scandal exposed

2021-03-16
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of facial recognition technologies deployed by several companies that secretly collect and misuse biometric data without user consent, violating legal frameworks such as the Civil Code and personal information protection laws. The harms include privacy violations, potential financial loss, and risks to personal safety if data is leaked or misused. The direct involvement of AI in these harms, the scale of data collected, and the legal breaches confirm this as an AI Incident rather than a hazard or complementary information. The event details realized harm rather than just potential risk, fulfilling the criteria for an AI Incident.
The "3·15" Gala names 万店掌 for abusing facial recognition technology; Zhu Xiaohu serves as a director of the company

2021-03-15
163.com
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system. The report highlights the misuse of this technology by a company, which directly implicates violations of privacy and potentially human rights. This constitutes harm under the framework's category (c): violations of human rights or breach of obligations under applicable law. Therefore, this event qualifies as an AI Incident due to the realized harm from misuse of an AI system.
3·15 Gala sidelights: the face-collecting companies have well-known investors, PR staff on standby in KTV rooms, and Tesla deemed "not qualified" for the show

2021-03-15
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses companies providing facial recognition services that collect and process biometric data, which involves AI systems. The harms described include violations of personal privacy and data security, which are breaches of fundamental rights and legal protections. These harms have already occurred as evidenced by the exposure and regulatory actions such as app removals. The involvement of AI in facial recognition and data processing is central to the incident. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the use of AI systems in personal data collection and privacy violations.
Facial recognition, clenbuterol, 360 Search, and more: the problems exposed at this year's 3·15 Gala!

2021-03-15
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems collecting biometric data without consent, which is a violation of personal rights and privacy, constituting harm. The recruitment platforms' algorithmic systems enable unauthorized access to personal data, leading to privacy breaches. The mobile apps use AI-driven user profiling to target vulnerable elderly users with misleading content, causing financial and psychological harm. The search engines' AI-based ad placement systems facilitate the spread of false medical advertisements, posing health risks. These harms are direct and significant, involving violations of human rights and harm to communities. Hence, the event qualifies as an AI Incident under the OECD framework.
This year, 32 companies, websites, and stores were named by CCTV's 3·15!

2021-03-16
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions facial recognition companies and apps that collect personal data without consent, which involves AI systems. The harms include violations of privacy and consumer rights, which are breaches of fundamental rights. The harms are realized and have led to official exposure and responses. Hence, the event meets the criteria for an AI Incident due to direct involvement of AI systems causing harm to rights and privacy.
Face thieves abuse facial recognition; they truly have no shame left

2021-03-18
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system—facial recognition technology—used to capture and process biometric data without proper consent, leading to privacy violations and potential harm to individuals' rights. The misuse of this AI system has directly caused harm by enabling unauthorized data collection, profiling, and potential security risks. The article documents realized harm (privacy breaches and unauthorized surveillance), not just potential harm, and thus fits the definition of an AI Incident rather than a hazard or complementary information. The detailed description of the system's deployment, the harm caused, and the public exposure and responses confirm this classification.
A roundup of the problems exposed at CCTV's "3·15" Gala

2021-03-16
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems such as facial recognition technology and AI-driven recruitment platforms. The unauthorized collection and use of biometric data without consent is a direct violation of personal rights and privacy, constituting harm under the framework. The illegal downloading and resale of personal data from recruitment platforms facilitated by AI systems also breaches privacy and labor rights. The profiling and targeting of elderly users by AI-powered apps leading to scams is a direct harm to health and property of vulnerable groups. These harms are realized and documented, meeting the criteria for AI Incidents. Other issues reported, while serious, do not involve AI systems or their misuse and thus are not classified as AI Incidents.
Xinmin Quick Comment | Limits must be placed on where facial recognition can be used

2021-03-16
新民网
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (facial recognition technology) that has directly led to violations of personal privacy and potential risks to personal safety and property, which constitute harms under the AI Incident definition (violations of rights and harm to persons). The article reports actual occurrences of unauthorized facial data collection and misuse, not just potential risks. Therefore, this qualifies as an AI Incident.
3·15 exposé: sheep fed drugs, resumes sold off, "slimmed-down" steel rebar…

2021-03-16
星岛环球网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems installed in stores without consumer consent, leading to unauthorized collection and tracking of personal biometric data, which constitutes a violation of privacy rights. It also details how AI is used to profile elderly users to push misleading and potentially harmful advertisements, causing direct harm to consumers. The unauthorized downloading and sale of personal resumes from job platforms involve AI systems managing and distributing personal data, leading to breaches of labor and privacy rights. These harms are realized and ongoing, meeting the criteria for AI Incidents. Other consumer issues reported do not involve AI and are thus outside the AI harm framework.
Facial data leaks are hard to reverse: what can you do if you don't want to "change your face"?

2021-03-18
中国经济网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition systems by commercial brands to collect sensitive biometric data without user consent, violating legal principles and personal privacy. The harm is realized as consumers' biometric data is captured and used covertly, exposing them to lifelong risks since biometric data cannot be changed once leaked. This constitutes a violation of fundamental rights and legal obligations protecting personal information, fitting the definition of an AI Incident. The involvement of AI systems in the development and use stages is clear, and the harm is direct and significant.
Many merchants exposed for "stealing faces": where does facial recognition surveillance go wrong?

2021-03-18
中国经济网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used in retail environments to collect and analyze biometric data. The use of these AI systems has directly led to violations of personal privacy and data protection laws, which are recognized as breaches of fundamental rights. The harm is realized as consumers' biometric data is collected without informed consent, constituting a clear violation of human rights and privacy. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and privacy).
Kohler responds on facial recognition cameras and apologizes to consumers

2021-03-16
中国经济网
Why's our monitor labelling this an incident or hazard?
Facial recognition cameras are AI systems that process biometric data. The company's use of these cameras for visitor counting involves AI use. The event involves the company's response to concerns raised by a media report, including apologies and remedial measures. There is no indication that harm has occurred or that the AI system malfunctioned or was misused to cause harm. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about societal and governance responses to AI-related privacy concerns.
Zhou Hongyi and Zhu Xiaohu drawn in; companies respond

2021-03-16
中国经济网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems by companies that illegally collect personal biometric data without consent, which is a violation of privacy rights (an AI Incident). The recruitment platforms' illegal selling of resumes involves AI-driven data processing and platform algorithms, leading to personal data breaches and rights violations (AI Incident). The false medical advertisements on search platforms involve algorithmic promotion and ranking, causing harm to public health and consumer trust (AI Incident). These harms are realized and directly linked to AI systems' use or misuse. The automotive and food safety issues, while serious, do not involve AI systems and thus are not classified as AI Incidents or Hazards.
Topics such as #HEYTEA responds to using 万店掌 cameras# trend on social media as netizens worry about privacy violations

2021-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used by companies to collect personal data. The reported illegal collection of facial data constitutes a violation of personal privacy rights, which falls under violations of human rights or breaches of applicable laws protecting fundamental rights. Since the AI system's use has directly led to harm (privacy violations), this qualifies as an AI Incident.
Before abusing "facial recognition", first search your own conscience

2021-03-17
中国经济网
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose misuse has directly led to violations of privacy rights and harm to individuals' legitimate interests. The unauthorized collection and use of facial data constitute a breach of personal data protection laws and infringe on fundamental rights. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.
Privacy violations, data abuse… facial recognition stirs concern among the American public, while high-tech companies react in different ways

2021-03-18
证券时报网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically facial recognition technology, which qualifies as an AI system because it infers identities from input images. The harms described include violations of privacy rights, illegal data collection, racial bias leading to wrongful identification, and unauthorized surveillance, all of which constitute violations of human rights and privacy laws. These harms are ongoing and have led to lawsuits and legislative responses, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident because the development and use of facial recognition AI systems have directly and indirectly led to significant harms to individuals' rights and privacy.
Named at CCTV's 3·15! BMW, 360 and other companies implicated; see how they responded

2021-03-16
internet.cnmo.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI systems such as facial recognition cameras that capture and analyze biometric data without consent, constituting a violation of privacy and human rights. The recruitment platforms' illegal sale of personal data involves AI-driven data processing and breaches labor and privacy rights. The false medical advertisements disseminated via AI-powered ad platforms cause direct harm to consumer health. The harms are realized and documented, with companies issuing apologies and remedial actions. Hence, the event meets the criteria for an AI Incident involving direct harm and rights violations linked to AI system use and misuse.
红网 reporters visit several Kohler bathroom stores in Changsha: camera devices were removed overnight for rectification

2021-03-16
红网
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-based facial recognition cameras in stores capturing customers' facial data without consent, which is a violation of personal data protection laws and individual rights. The harm has already occurred as customers' biometric data was collected without authorization. The company's subsequent removal of the devices is a response to the incident but does not negate the fact that the AI system's use led to a rights violation. Therefore, this qualifies as an AI Incident due to the realized harm involving violation of personal rights through AI system misuse.
I just visited Kohler; how is my "facial" data protected?

2021-03-18
红网
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and use of AI facial recognition systems in retail stores that capture and process customers' biometric data without consent or notification, which directly violates personal information protection laws and individuals' rights. This constitutes an AI Incident because the AI system's use has directly led to violations of human rights and legal obligations concerning personal data privacy. The article also mentions the company's apology and plans to remove the equipment, but the harm from unauthorized data collection has already occurred. Therefore, this is not merely a hazard or complementary information but an AI Incident involving realized harm.
Kohler bathrooms snapping faces? The Youxian procuratorate moves swiftly to protect facial data security

2021-03-18
红网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology (an AI system) to capture and process biometric data without proper consent, which constitutes a violation of personal information rights and privacy. This unauthorized collection and processing of sensitive biometric data has already occurred, causing harm to individuals' privacy and potentially their property security. The legal actions taken by the procuratorate are responses to this realized harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing violations of human rights and personal data protection laws.
Visit a mall and lose your "face" without even knowing it

2021-03-16
红网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through facial recognition technology used in surveillance cameras to collect and analyze sensitive biometric data without consent, which constitutes a violation of personal rights under applicable laws. This misuse of AI has directly led to a breach of fundamental rights related to privacy and data protection, fulfilling the criteria for an AI Incident under violations of human rights or legal obligations protecting personal information.
Camera manufacturer holding large volumes of facial data placed under investigation

2021-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: facial recognition technologies used by camera manufacturers and businesses. The unauthorized collection and storage of facial data constitute a violation of personal privacy and consumer rights, which falls under harm category (c): violations of human rights or breach of legal obligations protecting fundamental rights. The involvement of AI in the development and use of these facial recognition systems directly led to these harms, and regulatory investigations confirm the seriousness of the incident. Therefore, this qualifies as an AI Incident.
After CCTV's "3·15" Gala, facial recognition cameras at Kohler stores in Chengdu were urgently taken down. After the gala ended, the companies involved and the relevant authorities issued statements and launched investigations. Regarding the widely raised allegation that store cameras illegally collected personal information, what further measures have Kohler stores in Chengdu taken? And how can consumers effectively keep their personal information from leaking?

2021-03-16
四川在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition cameras, which are AI systems processing biometric data. The cameras were allegedly used without proper consent, constituting illegal collection of personal sensitive information, a violation of privacy rights. This has led to investigations and removal of the cameras, indicating harm has occurred. The involvement of the AI system (facial recognition) directly led to the privacy violation harm. Hence, this is an AI Incident as per the definitions provided.

2021-03-16
四川在线
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (facial recognition cameras with AI capabilities) used by businesses to capture biometric data without consent, violating legal and ethical standards protecting personal information. The harm is realized as individuals' privacy rights are breached, and there is a direct link between the AI system's use and the harm caused. This fits the definition of an AI Incident under violations of human rights and applicable law protecting fundamental rights. The article details actual harm occurring, not just potential risk, so it is not an AI Hazard or Complementary Information. It is not unrelated as AI systems are central to the issue.
Information security back in the spotlight: many big-name companies named at the 3·15 Gala

2021-03-16
新民网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems capturing biometric data without consent, mobile apps collecting personal data and manipulating users, and the illegal trade of personal resumes obtained from recruitment platforms. These activities have led to realized harms including privacy violations, potential fraud, and exploitation of vulnerable populations. The involvement of AI systems in data collection, profiling, and unauthorized surveillance directly contributes to these harms, meeting the criteria for an AI Incident under violations of human rights and personal data protection laws.
By Wu Chen: The opening salvo of this year's 3·15 Gala targeted merchants' abuse of facial recognition. The investigation found that abusing facial recognition to tag customers in offline stores and help retailers manage them has grown into a vast ecosystem, with service providers such as 万掌门 and 优络客 and global brands such as Kohler and BMW among the adopters. One service provider alone claims to have collected more than 100 million faces; this abuse of individuals' biometric privacy rivals the rampant trade in mobile phone numbers a decade ago.

2021-03-16
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI-based facial recognition systems by retailers and third-party service providers to identify customers and collect biometric data without consent, leading to significant privacy violations. This constitutes a breach of fundamental rights and harms communities by enabling unauthorized surveillance and data exploitation. The AI system's use is central to the harm described, meeting the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential.

On the evening of March 15, CCTV's 2021 "3·15" Gala exposed Suzhou Wandianzhang Network Technology Co., Ltd. ("万店掌") for illegally capturing facial data. According to the report, facial recognition devices installed by merchants in offline stores can directly capture users' facial information, including gender, facial features, and even emotional state.

2021-03-16
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-powered facial recognition systems that have been used to collect sensitive biometric data without user consent, violating privacy rights and legal regulations. The harm is realized as it breaches fundamental rights and legal protections concerning personal data. The AI system's use directly leads to this violation, fulfilling the criteria for an AI Incident. The subsequent responses by companies and regulatory references further confirm the recognition of harm and legal breaches.

Cailian Press, March 15: CCTV's 2021 "3·15" Gala has aired, with segments covering facial recognition, recruitment platforms, mobile phones for the elderly, clenbuterol-tainted meat, medical advertising, and other issues.

2021-03-16
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: facial recognition cameras installed in stores collecting biometric data without consent, and algorithmic systems on recruitment platforms and search engines facilitating unauthorized data access and fraudulent ads. These AI systems' use has directly led to violations of privacy rights, labor rights, and harm to consumers through misleading medical ads, fulfilling the criteria for an AI Incident. The harms are realized and ongoing, not merely potential. The article also mentions company responses, but the main focus is on the exposure of these harms caused by AI system misuse and failures, not just complementary information.

Kohler, BMW, and MaxMara stores installed facial recognition cameras; massive amounts of facial data have already been collected!

2021-03-17
3news.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems—facial recognition technology—that are used to capture and analyze biometric data without informed consent, which is a violation of privacy and legal frameworks protecting personal data. The unauthorized and covert use of these AI systems to collect and store massive amounts of sensitive facial data constitutes a breach of fundamental rights and privacy, fulfilling the criteria for an AI Incident under violations of human rights and applicable law. The harm is realized and ongoing, not merely potential, as the data collection is actively happening without consent.

The "face stealing" exposed at CCTV's "3·15" Gala is not so simple

2021-03-16
网易
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) to collect biometric data without consumer knowledge or consent, which constitutes a violation of personal privacy rights and applicable laws. This unauthorized data collection has directly led to harm in terms of privacy violations and breaches of legal obligations protecting personal and sensitive information. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in collecting facial data without consent.

Exclusive commentary | Kohler exposed for illegally collecting facial data: don't just stare at other people's faces...

2021-03-16
thehour.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition AI systems to collect biometric data without consent, which is a violation of personal privacy and legal rights. The collection and processing of biometric data without consent is a breach of the Civil Code provisions protecting personal information. The harm is realized as the data has been collected and potentially misused, and the incident has been publicly exposed. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to personal data protection.

No such function: surveillance at Tianjin's Kohler and BMW stores did not "steal faces"

2021-03-17
enorth.com.cn
Why's our monitor labelling this an incident or hazard?
The article discusses the use and removal of facial recognition AI systems in retail stores, which are AI systems capable of processing personal data. However, it does not report any realized harm such as privacy violations or legal breaches occurring at the stores mentioned, especially the Tianjin BMW store which explicitly denies having such systems. The main focus is on clarifying facts and reporting official responses to a prior incident. Therefore, this is Complementary Information as it provides updates and context related to a previously reported AI Incident but does not itself describe a new incident or hazard.

万店掌 exposed by CCTV's 3·15 Gala for illegally capturing facial data; China Resources and Hisense among its major clients

2021-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition cameras) that automatically captures and processes biometric data without consumer consent, leading to violations of privacy and personal rights. This fits the definition of an AI Incident as it directly leads to a breach of obligations under applicable law protecting fundamental rights. The company's response and the exposure by CCTV 315 further confirm the harm has occurred. Therefore, this is classified as an AI Incident.

"Someone stole my face": worries and fears in the era of facial recognition

2021-03-17
cb.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI facial recognition systems in retail stores to collect sensitive biometric data without user consent, violating legal and ethical standards. This misuse directly harms individuals' privacy rights and breaches laws such as the Personal Information Security Specification and the Civil Code. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident involving violations of human rights and legal obligations. The article also calls for regulatory enforcement to address these harms, confirming the realized nature of the incident rather than a potential hazard or complementary information.

Nine dark practices exposed! Zhou Hongyi and Zhu Xiaohu drawn in; BMW, 360, 51job and others respond urgently; regulators act overnight

2021-03-16
iceo.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as facial recognition technology and algorithmic data processing in recruitment platforms and advertising. The unauthorized collection and sale of personal data, as well as the spread of false medical ads through AI-driven platforms, directly violate human rights and legal protections related to privacy and consumer safety. These harms have already occurred and are linked to the development and use of AI systems, meeting the criteria for AI Incidents. The article also details responses from companies and regulators, but the primary focus is on the harms caused, not just the responses, confirming the classification as AI Incidents.

Australia - CCTV's 3·15 Gala blasts merchants for collecting facial recognition data; ironically...

2021-03-18
澳洲唐人街
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) in collecting and processing biometric data on a massive scale. The unauthorized collection of facial data by businesses without consumer consent directly leads to privacy violations and potential harm to users' property and personal security. The creation and deployment of such large datasets and AI models for facial recognition, especially when done without proper consent or safeguards, fits the definition of an AI Incident due to violations of human rights and privacy. Although the dataset release is framed as a technological achievement, the underlying issue of mass data collection without consent and the associated privacy risks are central, making this an AI Incident rather than a hazard or complementary information.

Facial recognition technology is being abused: what can be done?

2021-03-17
环球网
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data to identify individuals. The article describes merchants using this AI system without informing or obtaining consent from customers, which constitutes a violation of privacy and potentially human rights. This misuse has directly led to harm by unauthorized data collection and risks of data leakage and identity fraud. Therefore, this event qualifies as an AI Incident due to realized harm from the AI system's use.

2021-03-17
zgswcn.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems, specifically facial recognition technology, which automatically collects and processes biometric data without user consent, leading to privacy breaches. It also details the leakage of personal information from recruitment platforms that use AI for data management. These events have directly caused harm to individuals' privacy and security, constituting violations of rights and harm to communities. The article further highlights the misuse of AI-generated facial data for criminal activities, reinforcing the direct link between AI system use and realized harm. Therefore, this qualifies as an AI Incident under the OECD framework.

The China Academy of Information and Communications Technology (中国信通院) proposes launching a "Trustworthy Facial Recognition Guardian Plan"

2021-03-19
新浪财经
Why's our monitor labelling this an incident or hazard?
The article focuses on the launch of an initiative to develop standards and guidelines for trustworthy facial recognition AI technology. It does not report any specific incident or harm caused by AI, nor does it describe a plausible future harm event. Instead, it is about governance and industry response to AI technology, which fits the definition of Complementary Information.

Facial recognition is everywhere: what exactly makes it "terrifying"?

2021-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (facial recognition technology) being used to collect and analyze biometric data without consent, violating personal information security laws and infringing on privacy rights. The harm is realized as unauthorized surveillance and data collection, which breaches human rights and legal obligations. The AI system's role is pivotal in enabling this harm, fulfilling the criteria for an AI Incident under the OECD framework.

Another 3·15 has come: who will protect the users being preyed upon?

2021-03-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems such as facial recognition technology being used without consent to collect sensitive biometric data, which constitutes a violation of personal privacy and legal protections. The misuse of recruitment platform data facilitated by AI-enabled data processing leads to large-scale personal information leaks and illegal sales, which is a criminal violation of rights. The use of AI in targeted advertising and search engine manipulation causes misinformation and consumer deception, harming communities and individuals. These harms have already occurred, making this an AI Incident. The article also includes company apologies and regulatory responses, but the primary focus is on the realized harms caused by AI system misuse and illegal data collection.

Lawyers on facial data collection by merchants, property managers, and carmakers: consumers should firmly defend their rights under the law

2021-03-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems through the use of facial recognition technology, which is an AI system that processes biometric data to identify individuals. It describes the development and use of these AI systems by merchants, property managers, and car manufacturers to collect facial data without proper consent, leading to violations of privacy rights and potential financial harm. The harms described include violations of human rights (privacy), risks of data breaches, and misuse of sensitive biometric information, which align with the definition of AI Incidents. The article also mentions actual past incidents (e.g., CCTV exposure and subsequent apologies and camera removals) indicating realized harm rather than just potential harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Zhengtong Auto 4S dealership removes cameras overnight; employees say they were used only for internal marketing analysis

2021-03-17
新浪车行天下
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system—facial recognition technology—to collect personal biometric data without consent, which is a violation of privacy rights and applicable laws. The harm is realized as the illegal collection and processing of personal data, constituting a breach of fundamental rights. The company's removal of the cameras and self-investigation are responses but do not negate the fact that the AI system's use caused harm. Hence, this qualifies as an AI Incident due to violations of human rights and legal obligations related to data privacy.

Your face is being stolen, and there is nothing you can do about it

2021-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based facial recognition systems by various companies to collect and analyze personal biometric data without user consent, which directly leads to violations of privacy rights and personal information security. The harms are realized, including illegal data collection, potential resale of sensitive data, and the creation of a black market for facial data, all linked to AI system use. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and significant harm to individuals' privacy and security.

Named at "3·15", these companies responded like this! Do you accept their responses?

2021-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology, which is an AI system, to collect and analyze consumer facial data secretly. This AI system's use has directly led to violations of consumer privacy rights, a breach of fundamental rights protected by law. The harm is realized as consumers' biometric data was collected without consent and used to manipulate consumer behavior ('big data killing' or differential pricing). Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm to rights and privacy. Other issues mentioned, such as illegal sale of resumes and misleading advertising, while serious, do not explicitly involve AI systems or their misuse as described. The automotive defects and food safety issues are unrelated to AI systems.

MaxMara responds on in-store surveillance cameras: devices were only for foot-traffic counting and have now been removed

2021-03-18
fashion.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of facial recognition technology, an AI system, to collect and process customers' facial data without their knowledge or authorization, which is a violation of personal privacy rights. This constitutes harm under the category of violations of human rights or breach of obligations intended to protect fundamental rights. The fact that regulatory authorities are investigating and the cameras have been removed confirms the harm has materialized. Hence, this is an AI Incident rather than a hazard or complementary information.

Media: multiple merchants exposed for "stealing faces"; where does facial recognition surveillance go wrong?

2021-03-18
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) that automatically process biometric data to profile customers without their consent, which constitutes a violation of personal information rights under applicable laws. This unauthorized data collection and processing directly harm individuals' privacy rights, a form of human rights violation. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and potential privacy breaches). The article also highlights legal and regulatory responses but the primary focus is on the realized harm from the AI system's misuse.

3·15 exposes data privacy problems; smart home appliances should prioritize security systems and follow-up safeguards

2021-03-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The article focuses on the risks and potential harms associated with AI systems embedded in smart home devices, such as facial recognition and network-connected cameras, which could lead to privacy breaches and security incidents. Since no specific harm or incident is described as having occurred, but the potential for harm is clearly articulated, this qualifies as an AI Hazard. It also includes calls for better security practices and consumer protection, but these are responses to the hazard rather than reports of an incident or complementary information about a past event.

3·15's cybersecurity four-hit salvo | smart camera abuse, resume leaks, phone junkware, and search chaos all make the list!

2021-03-16
4hou.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (facial recognition cameras, data profiling apps, algorithmic ad placement in search engines) whose use or misuse has directly caused harm such as unauthorized biometric data collection, privacy violations, targeted scams, and deceptive advertising. The harms include violations of personal rights and potential financial and psychological harm to individuals, especially vulnerable groups like the elderly. The AI systems' development and use are central to these harms, meeting the criteria for AI Incidents rather than hazards or complementary information.

Heytea and Kohler (China) respond on use of facial recognition cameras; supplier 万店掌: products pulled for self-inspection

2021-03-16
xdkb.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used in retail stores to capture and process biometric data. The collection of such sensitive personal information without proper consent breaches legal requirements and personal rights, constituting harm under the framework. The supplier's acknowledgment, product removal, and public apologies confirm the incident's materialization. Hence, this is an AI Incident involving violations of personal data protection laws and human rights.

Suzhou Wandianzhang (万店掌), exposed at the 3·15 Gala, responds: special task force formed overnight to conduct self-inspection

2021-03-16
xdkb.net
Why's our monitor labelling this an incident or hazard?
The article focuses on the company's response to a media report and its commitment to data security and investigation. There is no explicit mention or reasonable inference of an AI system causing harm or posing a plausible risk of harm. The event is primarily about the company's remedial actions and public communication, which fits the category of Complementary Information rather than an AI Incident or AI Hazard.

万店掌 named for collecting facial data; Heytea also among its clients

2021-03-16
网易科技
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition cameras by WanDianZhang to illegally collect consumer personal information, which is a breach of privacy rights and legal protections. This is a direct harm caused by the AI system's use. The responses from the brands indicate awareness but do not negate the fact that the AI system was used in a way that led to harm. Therefore, this qualifies as an AI Incident due to violation of rights through the use of AI facial recognition technology.

Kohler responds to reports of in-store surveillance collecting facial data: will provide feedback as soon as possible

2021-03-16
网易
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment of AI systems (facial recognition cameras) that collect and process biometric data of customers without their knowledge or consent. This unauthorized data collection infringes on privacy rights and fundamental human rights, constituting harm under the framework. The involvement of AI in the development and use of these facial recognition systems is clear, and the harm (privacy violation) is realized. Hence, the event meets the criteria for an AI Incident.

Development: from the "right to subsistence" to the "right to privacy". The "right to subsistence" is easy to understand; put plainly, it is the right to stay alive. The "right to privacy" discussed here refers to personal information one does not wish to make public, including one's name, likeness, home address, contact details, and movement history, a right that equally deserves respect and protection. In recent years, however, with the widespread use of information technology, people's privacy has faced increasingly hard-to-prevent violations. Among the cases exposed at this year's CCTV 3·15 Gala, the facial recognition systems in some shopping venues probably drew the most heated discussion. A CCTV reporter...

2021-03-19
caiweiming.blog.caixin.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition) that have directly led to violations of privacy rights, a form of harm to fundamental rights under applicable law. The unauthorized collection and processing of biometric data without informed consent is a clear breach of personal information protection laws and constitutes an AI Incident. The article details realized harm through privacy infringements caused by AI system use, not merely potential or hypothetical risks. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

CCTV 3·15 names Kohler: cameras capture facial information without permission

2021-03-17
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-based facial recognition technology to collect biometric data without consumer consent, violating legal frameworks protecting personal information and privacy. This unauthorized data collection constitutes a breach of fundamental rights and legal obligations, fitting the definition of an AI Incident due to the direct harm to individuals' rights and privacy caused by the AI system's use.

Your face is being stolen, and there is nothing you can do about it

2021-03-16
36氪
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) that have directly led to violations of privacy and personal information rights, which are protected under law. The unauthorized collection and potential resale of facial data have caused real harm to individuals' privacy and security, fulfilling the criteria for an AI Incident. The article describes actual realized harm rather than potential harm, and the AI system's role is pivotal in enabling this harm. Hence, the classification as AI Incident is appropriate.

Full list of companies exposed at the 3·15 Gala: the end of the wild era of digitization and informatization

2021-03-16
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered facial recognition technology to capture and analyze consumer data without consent, as well as the illegal downloading and distribution of personal resume data from recruitment platforms, which rely on AI-driven data processing. These activities have directly resulted in violations of privacy and consumer rights, fulfilling the criteria for harm under the AI Incident definition. The involvement of AI systems in these harms is clear and direct, and the harms have materialized, not merely potential. Hence, the classification as AI Incident is appropriate.

Luxury watch repair exposed: of 11 shops, only the Xi'an and Chengdu stores performed repairs properly

2021-03-15
华商网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems by stores like Kohler and others to collect biometric data without consent, which is a violation of privacy rights and thus an AI Incident under the framework. Additionally, the mobile cleaning apps perform automated, high-frequency data collection and user profiling, indicative of AI or algorithmic systems, leading to deceptive practices harming elderly users, also qualifying as an AI Incident. Other parts of the article describe consumer fraud, product safety issues, and deceptive advertising that do not involve AI systems or AI-related harms, so they are not classified as AI Incidents or Hazards. The overall report is primarily about consumer protection but includes significant AI-related harms, justifying classification as AI Incident.

Responses and follow-ups from companies exposed at 3·15

2021-03-17
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (artificial intelligence cameras) used by Suzhou Wandianzhang for illegal facial data collection, which is a violation of personal privacy and data security, thus an AI Incident. Additionally, recruitment platforms' misuse of AI systems to handle personal data leading to data breaches and the use of AI-driven search engines to disseminate false medical ads causing consumer harm also qualify as AI Incidents. The harms are realized, not just potential, and the AI systems' development, use, or malfunction directly or indirectly led to these harms. The article also covers responses and investigations but the primary focus is on the incidents themselves, so the classification is AI Incident rather than Complementary Information.

Putting privacy and security first: 神指宝盒 does "subtraction" for technology

2021-03-17
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of facial recognition technology, which is an AI system that processes biometric data to identify individuals. The unauthorized collection and storage of facial data without consent constitute a violation of privacy rights, a breach of fundamental rights protected by law, thus meeting the criteria for an AI Incident. The article details realized harm to individuals' privacy and potential threats to their security due to AI misuse and data breaches. The mention of a privacy-protecting product is complementary information about responses to these harms but does not negate the incident classification. Therefore, the primary classification is AI Incident.

After being called out at the 3·15 Gala: who is scrambling to rectify, and who is stubbornly holding out?

2021-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in facial recognition cameras installed in stores without proper consent, which constitutes a violation of personal privacy rights (a breach of fundamental rights). Additionally, AI-driven advertising and search algorithms on UC Browser and 360 Search are implicated in spreading false medical advertisements, causing harm to users through misinformation. These are direct harms caused by AI system use. Therefore, these events qualify as AI Incidents. Other issues reported, while serious, do not involve AI systems and thus do not affect the classification. The presence of realized harm linked to AI system use and misuse justifies classification as AI Incident.

"3·15" Gala exposure: Kohler stores covertly installed cameras with facial recognition, collecting massive amounts of facial data

2021-03-15
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The use of facial recognition cameras involves AI systems that process biometric data. The covert collection of customers' facial data without their knowledge or consent constitutes a violation of privacy and potentially breaches legal protections related to personal data and human rights. This is a direct harm related to violations of rights under applicable law, thus qualifying as an AI Incident.

3·15 Gala | Cameras leak facial data; Suzhou market regulators begin investigating the manufacturer

2021-03-15
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI facial recognition systems to collect biometric data without informing or obtaining consent from individuals, which is a violation of human rights and applicable data protection laws. The harm is realized as personal data has been secretly collected, constituting a breach of obligations intended to protect fundamental rights. The investigation by authorities confirms the seriousness of the issue. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems causing violations of rights.

Lanjing 3·15 | 万店掌 and other facial recognition devices named for indiscriminately capturing facial data, involving gender, age, and other privacy and property-security concerns

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition devices that capture sensitive biometric data without individuals' knowledge or consent, infringing on privacy and potentially threatening personal and property security. This is a direct harm caused by the use of an AI system, fitting the definition of an AI Incident under violations of human rights and privacy. The involvement of AI in the development and use of these devices is clear, and the harm is realized as the data was collected without consent and used to blacklist certain individuals, indicating misuse and breach of rights.

Privacy protection bears the brunt: Kohler named; the covert use of "facial recognition" to steal user data has formed a complete industry chain

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used to capture biometric data without consent, which is a violation of personal data protection laws and fundamental rights. The unauthorized collection and storage of sensitive personal information constitute a breach of obligations under applicable law protecting fundamental rights. This meets the criteria for an AI Incident because the AI system's use has directly led to harm in the form of privacy violations and potential risks to personal security and property.

"3·15" Gala exposure: Kohler stores covertly installed cameras with facial recognition, collecting massive amounts of facial data

2021-03-15
3news.cn
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition) is explicitly involved, used without informing customers, resulting in unauthorized mass collection of sensitive biometric information. This use directly leads to violations of human rights and applicable laws protecting privacy and data rights, fitting the definition of an AI Incident.

CCTV 3·15 exposes facial recognition cameras

2021-03-15
中关村在线
Why's our monitor labelling this an incident or hazard?
The use of facial recognition AI systems to collect and analyze biometric data without user consent directly breaches privacy rights and legal obligations protecting personal data. The AI system's deployment leads to violations of fundamental rights, specifically privacy and data protection, which qualifies as harm under the framework. Therefore, this event is classified as an AI Incident due to the realized violation of rights caused by the AI system's use.

Roundup of problems exposed at CCTV's "3·15" Gala: facial recognition, clenbuterol, and more

2021-03-15
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of facial recognition AI systems installed in various stores without informing or obtaining consent from consumers, which is a direct violation of personal information security regulations. The AI system's use has directly led to privacy breaches and potential harm to individuals' rights and security, fitting the definition of an AI Incident under violations of human rights and privacy. Other issues in the report do not clearly involve AI systems causing harm. Hence, the event qualifies as an AI Incident due to the unauthorized and non-consensual use of AI facial recognition technology causing harm.

CCTV's "3·15" Gala airs; program covers facial recognition and other issues

2021-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of AI systems (facial recognition cameras) that collect biometric data without user consent, which is a breach of personal information security laws and infringes on privacy rights. The unauthorized collection and storage of facial data can lead to significant harm to individuals' privacy and security, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations.

Going against the grain: what Tencent, Baidu, and other giants are eyeing is not AI face-swapping but the business of detecting "fake faces"?

2021-03-14
36氪
Why's our monitor labelling this an incident or hazard?
The article describes AI systems involved in face generation and face swapping, their use in various applications, and the societal concerns about privacy and misuse. However, it does not describe a particular event where harm has occurred due to these AI systems, nor does it describe a specific imminent risk or hazard event. Instead, it focuses on the broader context, including the development of detection technologies and regulatory challenges. Therefore, it fits the definition of Complementary Information, as it provides supporting context and updates about AI systems and their societal implications without reporting a new AI Incident or AI Hazard.

购房被拒投诉售楼处,房地产"人脸识别第一案"在慈溪审理_详细解读_最新资讯_热点事件_36氪

2021-03-15
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system—facial recognition technology—deployed by real estate sales offices. The AI system's use has directly led to harm: infringement of privacy rights, lack of informed consent, and discriminatory denial of purchase benefits. These harms fall under violations of human rights and consumer rights, meeting the criteria for an AI Incident. The ongoing legal proceedings and public concern further confirm the materialized harm rather than a mere potential risk. Hence, this is classified as an AI Incident.

《少年的你》提名奥斯卡最佳国际影片_实时热点_热点聚焦_36氪快讯_36氪

2021-03-15
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
Facial recognition systems are AI systems involved in the event. The merchants' use of these systems without informing or obtaining consent from individuals leads to violations of rights, specifically privacy and data protection rights, which fall under human rights and legal obligations. Therefore, this event qualifies as an AI Incident due to the realized harm of rights violations caused by the use of AI systems.

信息安全仍是"315晚会"重灾区:滥用人脸识别、简历泄露、搜索乱象......

2021-03-15
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of facial recognition AI systems by various companies to collect biometric data without consent, which constitutes a violation of privacy rights and poses risks to personal security. Additionally, the unauthorized access and sale of personal resume data from recruitment platforms involve AI-driven data management systems and represent breaches of data protection and labor rights. These harms have already occurred and are directly linked to the development and use of AI systems. Therefore, this qualifies as an AI Incident. Other issues reported, such as food safety and product defects, do not involve AI and are not relevant to this classification.

科勒卫浴回应门店装监控收集人脸数据:已知晓 将尽快反馈

2021-03-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the deployment of AI systems (facial recognition cameras) that collect biometric data without informed consent, violating personal data protection regulations. This unauthorized collection of sensitive biometric information constitutes a violation of human rights and legal obligations. The harm is realized as customers' privacy rights are infringed upon through covert data collection and tracking across multiple stores. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems causing a breach of rights and legal violations.

315晚会惊曝多家知名商店安装人脸识别摄像头:科勒卫浴、宝马均在列

2021-03-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (facial recognition technology) used in commercial settings to collect biometric data. The use of these AI systems without informed consent and proper authorization breaches legal frameworks protecting personal information, thus constituting a violation of rights under the applicable law. The harm is realized as users' privacy and personal data security are compromised. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to personal data protection.

央视315晚会曝光人脸识别滥用:不知情下人脸已被窃取

2021-03-15
驱动之家
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The unauthorized collection and storage of facial data without informed consent constitutes a violation of personal rights and privacy, which falls under violations of human rights and legal obligations. The large scale of data collection and the lack of user awareness indicate direct harm to individuals' rights. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the misuse of AI systems in facial recognition.

315曝光的悠络客是谁?官网称百盛、好利来等为客户,腾讯、用友曾投资

2021-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (facial recognition technology) developed and used by 悠络客 and others to collect and process personal data without informed consent, which is a violation of privacy rights and consumer protections. This constitutes an AI Incident because the AI system's use has directly led to harm in the form of rights violations and privacy breaches. The article details realized harm rather than potential harm, and the AI system's role is pivotal in enabling this harm.

315晚会曝光 科勒卫浴违规窃取人脸数据

2021-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the unauthorized collection of facial data, which implies the use of AI-based facial recognition technology. This unauthorized data collection without consumer consent is a breach of privacy rights and applicable laws protecting personal data, constituting harm under the category of violations of human rights and legal obligations. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in this context.

被315曝光的人脸识别:产业规模达100亿,5000张人脸只要10元

2021-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems to capture and store individuals' facial data without consent, which is a direct violation of privacy rights and legal protections. The unauthorized collection and subsequent sale of this sensitive biometric data cause harm to individuals' rights and personal security. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal obligations protecting fundamental rights.

央视315晚会上这些上海企业被点名,市场局一一介入执法

2021-03-15
The Paper
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions facial recognition systems (AI systems) capturing biometric data without consent, leading to privacy violations, which is a direct harm to individuals' rights. The resale of personal resume data from online platforms also constitutes a breach of privacy and data protection laws. The use of search browsers to display false advertisements likely involves AI-driven ad targeting algorithms causing consumer harm. These harms have materialized and are under investigation by regulatory authorities, confirming the direct or indirect role of AI systems in causing violations of rights and harm to consumers. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

315晚会曝光:人脸信息搜集、简历贩卖、又见瘦肉精

2021-03-15
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition systems installed in stores that collect biometric data without informing or obtaining consent from consumers. This unauthorized collection of sensitive personal data constitutes a breach of legal and human rights protections concerning privacy. The AI system's use in this context directly causes harm by infringing on individuals' rights and exposing them to risks related to data misuse or leakage. Therefore, this event qualifies as an AI Incident due to violations of human rights and privacy laws resulting from the AI system's use.

科勒卫浴、宝马4s店、maxmara门店被曝安装摄像头,收集人脸信息

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used to collect biometric data without proper consent, which is a violation of personal information protection laws and users' rights. The harm is realized as unauthorized data collection threatens privacy and security, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations protecting personal data. Therefore, this is classified as an AI Incident.

"3·15"第一枪点名科勒卫浴!这些企业"偷"走你人脸信息!

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based facial recognition systems by multiple companies to collect biometric data without user consent, violating personal information security laws and infringing on privacy rights. The harm is direct and realized, as customers' facial data is captured and stored without their knowledge, posing risks to their privacy and security. The involvement of AI systems in this unauthorized data collection and the resulting privacy violations meet the criteria for an AI Incident under the OECD framework, specifically under violations of human rights and harm to individuals' privacy and security.

科勒回应3·15晚会曝光:"我们不会泄露个人信息"

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used in a way that directly leads to violations of human rights, specifically privacy rights, by collecting and tracking biometric data without informed consent. This constitutes an AI Incident under the framework because the AI system's use has directly led to a breach of obligations intended to protect fundamental rights. The harm is realized as the unauthorized data collection threatens users' privacy and security. Therefore, this event qualifies as an AI Incident.

2021年3·15内容涉及人脸识别科勒卫浴被曝收集顾客人脸信息

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
Facial recognition, explicitly mentioned here, is an AI system. The event involves the use of these systems to collect biometric data without customer consent, which constitutes a violation of human rights and of applicable laws protecting privacy. The harm (a violation of rights) has already occurred, as the unauthorized data collection is ongoing. This therefore qualifies as an AI Incident due to the direct involvement of AI systems in a breach of rights.

聚焦3・15|汽车4S店用“人脸识别”泄露顾客隐私 正通汽车上百家店涉及其中

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems—facial recognition technology used for real-time data capture and analysis. The use of these AI systems has directly led to violations of privacy rights and legal protections concerning biometric data, as customers' facial information is collected without their knowledge or consent. This constitutes a breach of applicable laws and fundamental rights, fulfilling the criteria for an AI Incident under the framework. The harm is realized and ongoing, not merely potential, as the unauthorized data collection and profiling are actively occurring.

人脸信息遭泄露 央视315点名万店掌、悠络客、雅量科技、瑞为4家企业

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves facial recognition technology, which is an AI system. The unauthorized collection and leakage of facial data directly harms individuals' privacy rights and likely breaches applicable laws protecting personal data and human rights. The involvement of multiple companies and stores indicates a systemic issue. Since harm to rights has occurred due to the AI system's misuse, this qualifies as an AI Incident under the framework.

315晚会 | 科勒等公司被曝安装人脸识别摄像头,涉嫌违规收集个人生物识别信息

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as facial recognition technology used in retail and other settings. The use of these AI systems has directly led to violations of personal rights and privacy by collecting sensitive biometric data without consent, which is a breach of legal obligations under personal information protection laws. The harm is realized and ongoing, as customers' biometric data is captured and used without their knowledge or authorization, posing risks to their privacy and security. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and breach of legal obligations protecting personal data.

谁在偷我们的脸?万店掌、悠络客、雅亮科技、瑞为等人脸识别企业被央视315点名

2021-03-15
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition systems by multiple companies to collect and process biometric data without customers' knowledge or consent. This unauthorized data collection constitutes a violation of personal information rights and legal regulations, which is a breach of obligations intended to protect fundamental rights. The AI system's use directly leads to this harm. Hence, the event meets the criteria for an AI Incident due to the realized harm caused by the AI system's use.

3·15晚会曝光商户"偷脸",上海两家企业被点名,市场监管部门已介入调查

2021-03-15
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment of AI systems (facial recognition cameras) that capture biometric data without consumer consent, violating legal frameworks such as the Personal Information Security Specification and the Civil Code. This unauthorized data collection harms individuals' privacy and security, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The involvement of regulatory authorities and the detailed description of the AI system's use and its consequences confirm this classification.

科勒卫浴回应门店装监控收集人脸数据:已知晓,将尽快反馈

2021-03-15
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of AI systems (facial recognition cameras) that collect and process biometric data of customers without their knowledge or consent. This unauthorized data collection constitutes a violation of privacy rights and applicable legal protections, fulfilling the criteria for harm under human rights violations. The involvement of AI in the development and use of these facial recognition systems directly leads to this harm. Hence, the event is classified as an AI Incident.

315晚会第一弹:直击违规抓取人脸信息,科勒、宝马等被点名

2021-03-15
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems with facial recognition capabilities used to capture and analyze biometric data. The unauthorized collection and storage of this sensitive personal information directly breaches legal protections for personal data and privacy, constituting a violation of human rights and legal obligations. The harm is realized: individuals' biometric data is being gathered without their knowledge or consent, threatening their privacy and security. This therefore qualifies as an AI Incident, as the AI system's use has directly led to violations of rights and harm to individuals.

3·15晚会曝光商户“偷脸”:上海两家企业被点名 市场监管部门已介入调查

2021-03-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
Facial recognition cameras are AI systems that process biometric data to identify or analyze individuals. The unauthorized collection of personal data without consent is a breach of legal protections for privacy and personal information, which falls under violations of human rights and applicable law. Since the AI system's use has directly led to this violation, this qualifies as an AI Incident.

遭央视315晚会点名 苏州万店掌:没泄露隐私

2021-03-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of facial recognition AI systems to capture and record biometric data of customers without their informed consent, violating legal frameworks such as the Personal Information Security Specification and the Civil Code. The AI system's use directly leads to a breach of privacy rights, a form of harm to individuals' fundamental rights. The companies' acknowledgment of the issue and the regulatory context confirm the realized harm. Hence, this is an AI Incident involving violations of human rights and legal obligations related to personal data protection.

3.15晚会曝光名单来了!

2021-03-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems such as facial recognition cameras and data processing algorithms used to collect and analyze biometric and personal data without consent, which is a violation of privacy rights and legal regulations. The misuse and unauthorized sale of personal data, including job applicant resumes, directly harm individuals' rights and privacy. The deceptive apps targeting elderly users through AI-driven profiling cause harm by misleading and exploiting a vulnerable group. These harms are realized and ongoing, meeting the criteria for an AI Incident under violations of rights and harm to communities.

315曝光人脸数据安全:涉事雅量科技“老板”投资广

2021-03-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as facial recognition cameras with advanced capabilities to identify and label individuals, which are used without consumer consent. This use directly leads to violations of privacy and potentially breaches fundamental rights, constituting harm to individuals. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and privacy.

央视“3·15”开播 宝马4S店安装人脸识别系统被曝光

2021-03-15
财经网
Why's our monitor labelling this an incident or hazard?
Facial recognition, explicitly mentioned in the report, is an AI system. Its use for marketing, together with potentially unconsented data collection leading to price discrimination, can be considered a violation of privacy and consumer rights. Since the report exposes ongoing use of these AI systems and related harm, this qualifies as an AI Incident under violations of human rights or breach of obligations under applicable law protecting fundamental rights.

科勒卫浴客服回应人脸识别摄像头:已知悉,将尽快反馈相关部门

2021-03-15
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The facial recognition cameras use AI to identify customers' faces, gender, age, and mood, which qualifies as an AI system. The unauthorized collection of biometric data without consent constitutes a violation of personal privacy rights and applicable laws protecting sensitive personal information. This directly leads to harm in terms of violation of human rights and legal obligations. Therefore, this event qualifies as an AI Incident due to the realized harm from the AI system's use.
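The rationale above notes that these cameras estimate a customer's identity, gender, age, and mood, and that the legal defect is processing without consent. Purely as an illustrative sketch (all names here are hypothetical, not any vendor's actual API), a design that gates biometric analysis on explicit opt-in might look like this:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConsentRegistry:
    """Tracks which visitors have explicitly opted in to biometric analysis."""
    opted_in: set = field(default_factory=set)

    def grant(self, visitor_id: str) -> None:
        self.opted_in.add(visitor_id)

    def has_consent(self, visitor_id: str) -> bool:
        return visitor_id in self.opted_in

def analyze_visitor(visitor_id: str, frame: bytes,
                    registry: ConsentRegistry) -> Optional[dict]:
    """Run attribute estimation only for visitors with recorded consent.

    Returns None (no processing at all) when consent is absent, mirroring
    the requirement that biometric data must not be collected by default.
    """
    if not registry.has_consent(visitor_id):
        return None  # drop the frame; no biometric features are extracted
    # Placeholder for a real face-attribute model (gender/age/mood estimation).
    return {"visitor": visitor_id, "attributes": "model output would go here"}

registry = ConsentRegistry()
print(analyze_visitor("v1", b"frame-bytes", registry))  # prints None: no consent yet
registry.grant("v1")
print(analyze_visitor("v1", b"frame-bytes", registry))
```

The point of the sketch is only that processing, not merely storage, is gated on consent; the stores described in these reports inverted that default.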

315曝光:喂药养羊,简历被贩卖、“瘦身”钢筋……

2021-03-15
杭州网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: facial recognition technology is used to capture and process biometric data without consent, and recruitment platforms use algorithmic systems to manage and distribute personal data. The misuse and unauthorized distribution of personal data directly violate individuals' rights and privacy, fitting the definition of harm under violations of human rights or breach of obligations under applicable law. Therefore, this qualifies as an AI Incident due to the realized harm caused by the development and use of AI systems in these contexts.

门店装人脸识别摄像头遭3·15晚会曝光 科勒客服回应:“我们不会泄露个人信息”

2021-03-15
杭州网
Why's our monitor labelling this an incident or hazard?
Facial recognition cameras are AI systems that analyze biometric data to identify individuals. The covert collection and processing of such data without consent constitute a violation of privacy rights and potentially other legal protections related to personal data. The reported harm includes threats to users' privacy and property security, which aligns with violations of human rights and harm to individuals. Therefore, this event qualifies as an AI Incident due to the realized harm stemming from the use of AI systems in a manner that breaches privacy and legal obligations.

央视3·15晚会曝光:科勒卫浴、宝马、MaxMara商店安装人脸识别摄像头

2021-03-15
杭州网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems with facial recognition capabilities used by multiple companies to capture and analyze customers' biometric data without their knowledge or consent. This unauthorized collection breaches legal frameworks protecting personal information and constitutes a violation of human rights and privacy. The harm is realized: customers' sensitive biometric data is being secretly collected and stored, exposing them to serious privacy and security risks. The event therefore qualifies as an AI Incident.

科勒卫浴回应门店装监控收集人脸数据:已知晓,将尽快反馈

2021-03-15
华商网
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI systems (facial recognition cameras) used to collect and process biometric data without informed consent, which is a breach of privacy and human rights. The harm is realized as the data collection has already occurred extensively (billions of face data points). The involvement of AI in the development and use of these systems directly leads to violations of fundamental rights. Therefore, this qualifies as an AI Incident under the framework.

3·15晚会曝光人脸识别!涉事公司万店掌曾遭行政处罚

2021-03-15
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) that collects and processes personal biometric data without consent, leading to a breach of privacy rights. The unauthorized collection and sale of facial data directly violates human rights and legal protections for personal data. The article describes realized harm through the exposure of these practices and the administrative penalty imposed, indicating that the AI system's use has directly led to harm. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of applicable law protecting fundamental rights.

315晚会惊曝:多家知名商店安装人脸识别摄像头 科勒卫浴、宝马、Max Mara均在列 海量人脸信息已被搜集

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used by multiple stores to collect biometric data without consent, violating personal information security regulations. The harm includes violations of privacy and potential threats to users' property and personal security, which are direct harms caused by the AI system's use. Therefore, this qualifies as an AI Incident due to the realized harm from unauthorized biometric data collection and privacy breaches.

天眼查315线索:监控摄像 谁在盗我的脸

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
The article implicitly involves an AI system: facial recognition technology used for identity verification. The main issue is the unauthorized collection of facial data without user consent, a violation of privacy and information security principles. However, the article reports neither a specific AI Incident in which harm has occurred (e.g., identity theft, or misuse of data causing injury or rights violations) nor a specific AI Hazard event with plausible future harm. Instead, it provides background on the risks and regulatory status of a company involved in such technology. This fits the definition of Complementary Information, as it supports understanding of AI privacy risks and governance without reporting a new incident or hazard.

天眼查315线索:直击人脸识别,谁在"偷"我的脸?

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data to identify individuals. The report exposes misuse and unauthorized use of facial data, a violation of personal rights and privacy that fits the definition of an AI Incident under violations of human rights or breach of applicable law. The administrative penalty and company details provide supporting context, but the core issue is the misuse of AI facial recognition leading to harm. Therefore, this event is classified as an AI Incident.

天眼查315线索:直击人脸识别,谁在"偷"我的脸?

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The report reveals that companies have been secretly collecting people's facial data without their knowledge or consent, which is a violation of privacy and potentially human rights. This unauthorized use of AI systems has directly led to harm in terms of privacy violations and breaches of legal protections. Therefore, this event qualifies as an AI Incident due to violations of rights caused by the use of AI facial recognition technology without consent.

科勒卫浴回应门店装监控收集人脸数据:已知晓,将尽快反馈

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data. The collection of facial data without proper consent or legal compliance can lead to violations of privacy rights, which falls under harm category (c) - violations of human rights or breach of legal obligations protecting fundamental rights. The exposure by the consumer rights program indicates the harm is realized or ongoing. Kohler's acknowledgment and intention to respond do not negate the incident but confirm awareness. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing or contributing to harm.

央视3·15点名,非法人脸识别、浏览器虚假广告...等九大黑幕曝光

2021-03-15
网易科技
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems installed in stores without user consent, leading to unauthorized biometric data collection, which is a direct violation of privacy and personal information security. The illegal downloading and sale of personal resumes from AI-powered recruitment platforms represent a breach of labor and intellectual property rights. The mobile apps that profile elderly users and push deceptive ads exploit AI-driven user profiling and recommendation algorithms, causing harm to vulnerable groups. The presence of AI systems is clear in facial recognition, data scraping, and algorithmic advertisement placement. The harms described are realized and significant, including privacy violations, potential financial harm, and misinformation. Hence, this event meets the criteria for an AI Incident.

万店掌违规抓取人脸信息遭央视315曝光,青岛海信为POS战略合作方,悠络客、雅量和瑞为亦被点名

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, namely facial recognition technologies, used by companies such as 万店掌 to capture and record customers' facial data without their knowledge or consent. This constitutes a violation of personal information rights and of legal frameworks protecting biometric data, a form of harm to human rights and privacy. The unauthorized use and storage of sensitive biometric data can lead to serious privacy breaches and potential financial harm, fulfilling the criteria for an AI Incident: realized harm caused by the misuse of AI facial recognition systems.

直击315晚会:谁在偷我的"脸"?

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (facial recognition technology) to capture and process consumers' facial data without consent, which is a direct violation of personal privacy and legal protections. This misuse has already occurred and caused harm to individuals' rights, fitting the definition of an AI Incident under violations of human rights and breach of applicable law. The involvement of AI in the development and use of these facial recognition systems is clear, and the harm is realized through unauthorized data collection and potential misuse.

蓝鲸315丨万店掌等人脸识别设备随意抓取人脸信息被点名,涉及性别、年龄等隐私与财产安全信息

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition devices) that have been used to collect sensitive biometric data without informed consent, violating privacy and personal data protection laws. This constitutes a violation of human rights and personal data rights, which fits the definition of an AI Incident. The harm is realized as the data was collected and used improperly, including blacklisting certain individuals, thus directly causing harm to individuals' rights and privacy.

"3·15"第一枪点名科勒卫浴!这些企业"偷"走你人脸信息!

2021-03-15
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition technology) being used by companies to collect biometric data without user consent, which is a direct violation of personal information protection laws and fundamental privacy rights. The harm is realized as individuals' facial data is collected secretly, posing risks to privacy and security. The involvement of AI in the development and use of these facial recognition systems is clear, and the resulting harm fits the definition of an AI Incident under violations of human rights and privacy. Hence, the event is classified as an AI Incident.

科勒卫浴回应门店装监控收集人脸数据:已知晓

2021-03-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) in the collection of biometric data without proper consent, which is a violation of privacy and human rights. This constitutes a breach of obligations under applicable law intended to protect fundamental rights. The harm is realized as the illegal collection and possible sale of facial data has occurred, directly linked to the AI system's use. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations through the use of AI systems for unauthorized facial data collection.

3·15晚会曝光!这些知名品牌被点名

2021-03-15
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of AI-based facial recognition systems by multiple brands to collect biometric data without user consent, which is a direct violation of privacy rights and legal standards. The unauthorized collection and processing of sensitive biometric information is a clear harm to individuals' rights and privacy, fulfilling the criteria for an AI Incident under the OECD framework. The AI system's use here directly leads to harm (violation of rights), not just a potential risk, so it is not merely a hazard or complementary information. Other parts of the article describe various consumer protection issues unrelated to AI systems, but the facial recognition misuse is clearly AI-related and harmful.

315晚会第一弹:直击违规抓取人脸信息,科勒、宝马等被点名

2021-03-15
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used to capture and analyze biometric data. The use is unauthorized and non-consensual, violating legal frameworks protecting personal information, thus constituting a breach of fundamental rights. The harm is realized as individuals' sensitive biometric data is collected and stored without consent, threatening privacy and security. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in violation of rights and laws.
315 Gala, Part One: Facial Recognition in the Spotlight; Who Is "Stealing" My Face?

2021-03-15
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (facial recognition cameras with AI capabilities) used to capture biometric data. The use is without informed consent, violating legal requirements and personal privacy rights, which is a direct harm to individuals' rights. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law protecting fundamental rights (privacy and data protection).
3·15 Gala Exposes Facial Recognition Cameras Installed in Kohler, BMW, and MaxMara Stores

2021-03-15
中国经济网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used in commercial settings to capture and process biometric data without customer knowledge or consent, violating legal requirements and personal privacy rights. The harm is realized and ongoing, as customers' sensitive biometric data is being collected and stored covertly, posing risks to privacy and security. This fits the definition of an AI Incident because the AI system's use directly leads to violations of human rights and breaches of legal obligations protecting personal data. The scale and nature of the unauthorized data collection confirm significant harm, not just a potential hazard or complementary information.
315 Gala Exposes Misuse of Facial Data: Massive Amounts of Facial Information Already Collected

2021-03-15
天极网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology (an AI system) to collect personal biometric data without proper consent, which is a direct violation of privacy rights and legal frameworks protecting personal information. The harm is realized as consumers' facial data has been collected and potentially misused without authorization, constituting an AI Incident under the framework's definition of violations of human rights and legal obligations.
CCTV's 3·15 Gala Names Names, Exposing Nine Major Scandals

2021-03-15
证券时报网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (facial recognition, recruitment platform algorithms, mobile app profiling, search engine ad algorithms) whose development and use have directly led to harms including privacy violations, unauthorized data collection, personal data breaches, and consumer deception. The harms affect individuals' rights, privacy, and safety, fulfilling the definition of AI Incidents. The involvement of AI is clear from the use of facial recognition technology, algorithmic data processing for recruitment and profiling, and AI-driven ad placement. The harms are materialized, not hypothetical, and thus this is an AI Incident rather than a hazard or complementary information.
The 3·15 Exposé List Is Here! Zhaopin and 51job Caught Selling Resumes!

2021-03-15
证券时报网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as facial recognition cameras and recruitment platforms that process personal data. The unauthorized collection and sale of biometric and personal resume data have directly led to privacy violations and facilitated fraud, which are harms to individuals and breaches of legal protections. The AI systems' development and use have directly contributed to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.
CCTV's 315 Gala Exposes Facial Recognition Cameras in Kohler Stores: Who Is "Stealing" My Face?

2021-03-15
internet.cnmo.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (facial recognition cameras) that capture and process personal biometric data without consent, which is a violation of privacy rights and applicable laws protecting personal data. This constitutes a breach of obligations under applicable law intended to protect fundamental rights. The harm is realized as the data is being secretly collected and sold, directly linked to the AI system's use. Therefore, this qualifies as an AI Incident under the framework.
3·15 Gala Exposé | Kohler, BMW, and MaxMara Stores Installed Facial Recognition Cameras; Massive Amounts of Facial Data Already Collected!

2021-03-15
新民网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems with facial recognition capabilities used to capture and process biometric data without customers' knowledge or consent. The unauthorized collection and storage of sensitive biometric information constitute a violation of personal privacy rights and legal frameworks such as the Personal Information Security Specification and the Civil Code. The AI system's use directly leads to harm in the form of privacy violations and potential threats to individuals' property and security. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to a breach of fundamental rights and legal obligations.
Nine 315 Exposés! Involving Clenbuterol-Fed Sheep, Infiniti, Zhaopin, and More

2021-03-15
青岛新闻
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the facial recognition cameras and recruitment platforms using AI for data processing and recommendation. The harms include violations of privacy rights, unauthorized data collection, and facilitating fraud, which are direct harms to individuals and communities. The phone cleaning apps' covert data collection and misleading behavior also constitute harm. These meet the criteria for AI Incidents as the AI systems' use or misuse directly leads to harm. Other reported issues do not involve AI systems or AI-related harm and are thus not classified as AI Incidents.
经济观察网 reporter Hu Yanming: Without our knowledge, our faces have already been captured and "numbered" by companies, then used for precision marketing.

2021-03-15
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used by companies to collect and process biometric data without informing or obtaining consent from individuals. This unauthorized data collection and use for precise marketing directly violates personal privacy rights and legal standards, constituting harm under the framework's category of violations of human rights and legal obligations. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.
Focus on "3·15" | Which Companies Were Named? CCTV's 3·15 Gala...

2021-03-15
thehour.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems by multiple companies to collect biometric data without user consent, which is a violation of personal information security and privacy rights. The recruitment platforms use AI-based data processing to manage and provide access to personal resumes, and the leakage and sale of this data represent a breach of privacy and labor rights. These harms are directly linked to the development and use of AI systems, fulfilling the criteria for AI Incidents. Other reported issues do not involve AI systems and thus do not qualify as AI Incidents or Hazards. The presence of AI systems and the direct harm caused by their misuse or unauthorized use justify classification as AI Incidents.
Kohler Responds to In-Store Cameras Collecting Facial Data: Will Follow Up as Soon as Possible

2021-03-15
大洋网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition cameras) used to collect and process biometric data covertly, which is a clear AI system involvement. The use of these systems has directly led to violations of privacy rights and possibly other human rights, fulfilling the criteria for an AI Incident. The company's acknowledgment and promise to respond do not negate the fact that harm has already occurred. Therefore, this event qualifies as an AI Incident due to the realized harm from the AI system's use.
"3·15" Gala: Kohler Stores Covertly Installed Cameras with Facial Recognition to Collect Facial Data

2021-03-15
Baidu.com
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data to identify individuals. The covert collection of facial data without consent constitutes a violation of human rights and legal protections related to privacy and data protection. The event reports actual use and data capture, indicating realized harm rather than potential harm. Therefore, this qualifies as an AI Incident due to violations of rights and privacy caused by the AI system's use.
3·15 | Zhengtong Auto's 100-Plus 4S Dealerships Named by CCTV's 3·15 Gala for "Stealing" Faces

2021-03-15
cb.com.cn
Why's our monitor labelling this an incident or hazard?
The use of facial recognition technology is an AI system application. The unauthorized and non-consensual collection of biometric data directly violates human rights and legal protections related to privacy. The event describes actual misuse of AI technology leading to harm (privacy violation), thus qualifying as an AI Incident under the framework.
The Stories of This Year's 3·15!

2021-03-15
chinatimes.net.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as facial recognition technology and recruitment platforms that process personal data. The unauthorized collection and sale of biometric and personal information directly harm individuals' privacy rights, constituting violations of fundamental rights protected by law. The involvement of AI in these harms is clear, as the facial recognition systems and data-driven recruitment platforms rely on AI for data processing and decision-making. The harms are realized, not hypothetical, as personal data has been collected and sold without consent. Thus, the event meets the criteria for an AI Incident. Other issues mentioned (food safety, false advertising) involve AI only indirectly or not at all, so the classification focuses on the AI-related privacy violations.
315 Gala Shocker: Multiple Well-Known Stores Installed Facial Recognition Cameras

2021-03-15
网易
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition cameras) used in retail environments to collect biometric data without consent, violating legal norms and personal privacy rights. This unauthorized use of AI technology directly harms individuals by infringing on their privacy and potentially threatening their property and personal security. The involvement of AI in the development and use stages, combined with the realized harm (privacy violations and legal breaches), clearly classifies this as an AI Incident rather than a hazard or complementary information.
315 Exposés: Drug-Fed Sheep, Resumes for Sale, "Slimmed-Down" Rebar...

2021-03-15
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based facial recognition systems deployed without consent, leading to unauthorized data collection and privacy violations, which constitute a breach of fundamental rights. The profiling of elderly users through data collected by mobile apps for targeted deceptive advertising also involves AI-driven user profiling causing harm. These harms have already occurred, making this an AI Incident. Other issues like the illegal sale of resumes and deceptive advertising involve data misuse but are not clearly linked to AI systems. The presence and misuse of AI in facial recognition and profiling justify classification as an AI Incident.
Kohler Responds to In-Store Cameras Collecting Facial Data: Aware of the Issue, Will Follow Up as Soon as Possible

2021-03-15
网易新闻中心
Why's our monitor labelling this an incident or hazard?
The article describes the deployment of AI systems (facial recognition technology) that collect and process biometric data without informed consent, which is a violation of privacy rights and potentially other legal protections. The involvement of AI in unauthorized data collection and tagging of individuals (e.g., labeling as 'professional troublemakers' or 'journalists') indicates misuse of AI technology leading to harm. The company's acknowledgment of the issue following media exposure confirms the incident's materialization. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy through the use of AI systems.
315 Gala Exposé: Kohler Collects Facial Data and Can Analyze Customers' Mood on Entering a Store

2021-03-15
网易科技
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (facial recognition technology) used in ways that have directly led to violations of human rights, specifically privacy and data protection rights, which are protected under applicable laws. The unauthorized collection and processing of biometric data without consent constitute a breach of legal obligations and cause harm to individuals and communities by infringing on privacy and potentially leading to financial and social harm. The detailed reporting of ongoing harm, legal scrutiny, and regulatory actions confirm this as an AI Incident rather than a mere hazard or complementary information.
List of Internet Companies Named at the 315 Gala Revealed

2021-03-15
网易科技
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems such as facial recognition cameras and data processing algorithms used by internet companies. The AI systems' use has directly led to harms including unauthorized biometric data collection, privacy violations, and enabling fraudulent activities through leaked personal data. The profiling and targeting of elderly users with misleading ads via AI-powered apps also constitute harm. These harms align with violations of human rights and harm to communities as defined. The event details realized harms, not just potential risks, and involves multiple companies and AI applications, confirming it as an AI Incident rather than a hazard or complementary information.
Kohler Exposed by CCTV's 315 Gala for Illegally Capturing Customers' Facial Data

2021-03-15
3news.cn
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system that processes biometric data. The unauthorized collection and use of customers' facial data without consent directly violates privacy rights and legal frameworks protecting personal information. The event describes realized harm through illegal data collection practices, which fits the definition of an AI Incident under violations of human rights and applicable law. Hence, the classification is AI Incident.
Four AI Companies Named by CCTV's 315 Gala; Large-Scale "Face Theft" Exposed

2021-03-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition technologies) used by several companies to collect and process biometric data without informing or obtaining consent from individuals. This unauthorized use of AI systems has directly led to violations of privacy rights and poses risks of harm to individuals if the data is leaked or misused. The harms include potential financial loss and threats to life safety, which fall under the definitions of AI Incident. The event involves the use and misuse of AI systems, not just potential harm, so it is not merely a hazard. It is not complementary information because the main focus is on the exposure of harmful practices, not on responses or updates. Hence, the classification is AI Incident.
CCTV's "3·15" Gala Airs: Exposed Companies Involve Facial Recognition, Recruitment Platforms, Clenbuterol, and More

2021-03-15
华商网
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI systems such as facial recognition technology used without consent to collect biometric data, which is a direct violation of privacy and human rights. The recruitment platforms' AI-driven data management systems allowed mass unauthorized access and sale of personal data, leading to privacy breaches. The mobile cleaning apps use automated data collection and profiling techniques to exploit elderly users, causing harm to a vulnerable group. These harms are realized and directly linked to the development and use of AI systems. The event details multiple instances of AI misuse causing significant harm, meeting the criteria for an AI Incident under violations of human rights and harm to communities. Other issues reported (e.g., food safety, car defects) do not involve AI systems and are outside the AI harm framework. Hence, the classification is AI Incident.
Nine 315 Exposés! Involving UC Browser, Infiniti, Zhaopin, and More

2021-03-15
网易
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as facial recognition cameras and recruitment platforms with AI capabilities. The harms include unauthorized collection and use of biometric data, illegal distribution of personal data, and dissemination of false medical ads, all of which are direct violations of privacy, rights, and health. The phone cleaning apps' AI profiling leads to scams targeting vulnerable elderly users, a clear harm. These meet the criteria for AI Incidents as the AI systems' use directly or indirectly causes harm. Other reported issues (car defects, food safety, steel quality, watch repair scams) do not involve AI and are unrelated. Hence, the classification is AI Incident.
CCTV's 315 Gala: Kohler and Other Stores Exposed for Installing Cameras to Collect Facial Data...

2021-03-15
sznews.com
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment and use of AI-powered facial recognition systems in retail environments to collect and analyze personal biometric data without informed consent. This use directly leads to violations of privacy and potentially other human rights, fulfilling the criteria for an AI Incident under the category of violations of human rights or breach of legal protections. The AI system's role is pivotal as it enables the covert data collection and profiling. Hence, the classification as AI Incident is appropriate.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
Yahoo News
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data to identify individuals. The companies' use of this technology without obtaining consent constitutes a breach of privacy rights, a violation of applicable law protecting fundamental rights. The event describes realized harm in terms of privacy violations and legal non-compliance, with Kohler acknowledging the issue and ceasing use. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through privacy violations.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
AP NEWS
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data to identify individuals. The companies' use of this AI system without obtaining proper consent constitutes a breach of privacy rights, a violation of applicable law protecting fundamental rights. This has directly led to harm in terms of privacy violations and potential threats to property security, fulfilling the criteria for an AI Incident. The event describes realized harm due to the AI system's use, not just potential harm, and includes a response from one company acknowledging the issue.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
The China Post, Taiwan
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here to identify customers. The companies' use of this technology without obtaining consent violates privacy laws, constituting a breach of legal obligations protecting fundamental rights. The event describes realized harm in terms of privacy violations and legal non-compliance, thus qualifying as an AI Incident under the framework.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
San Francisco Gate
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here to identify and track individuals without their consent, violating privacy rights protected by law. The event describes actual use of AI leading to a breach of legal obligations and fundamental rights, which fits the definition of an AI Incident. The harm is realized, not just potential, as companies tracked customers without informing them, and the legal framework requires consent. Kohler's apology and cessation of use further confirm the incident's recognition and impact.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
The Hindu
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here to identify and track customers without their consent, violating privacy laws that protect sensitive personal information. The event reports actual use of AI technology causing harm through privacy breaches, which is a violation of fundamental rights under applicable law. The companies' use of AI facial recognition without consent and the resulting public criticism and apology confirm the harm has occurred. Hence, this is an AI Incident involving AI system use leading to violations of human rights and legal obligations.
Consumer gala exposes privacy violators

2021-03-16
China Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used to capture and analyze personal data such as age, gender, and mood. The unauthorized collection and use of this data without consent directly violates privacy rights and applicable laws, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The harm is realized as personal privacy is compromised and legal breaches have occurred, with companies admitting to or being exposed for these violations.
Kohler, Ford, Infiniti in spotlight on China consumer rights show

2021-03-16
Financial Post
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred from the description of cameras scanning faces and analyzing customer mood, which involves AI-based facial recognition and emotion detection. The use of these AI systems for customer surveillance and targeted sales constitutes a use of AI that has led to violations of privacy rights, a form of human rights violation. The event describes realized harm through illegal or unethical data collection practices, meeting the criteria for an AI Incident. The company's rectification efforts do not negate the fact that harm occurred.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
WTOP
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here to identify and track customers. The companies' use of this technology without obtaining consent violates privacy laws that protect sensitive personal information, constituting a breach of legal obligations and fundamental rights. This breach has directly led to harm in terms of privacy violations. Kohler's apology and decision to stop using the technology further confirm the recognition of harm. Hence, this is an AI Incident involving violations of human rights and legal protections due to AI misuse.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
Times Leader
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The companies' use of this AI system without obtaining legally required consent constitutes a breach of privacy rights and legal obligations, which fits the definition of an AI Incident under violations of human rights or breach of applicable law. Although no direct harm like data leaks or injury is reported, the violation of privacy laws and the potential threat to security are sufficient to classify this as an AI Incident. The event involves the use of AI systems leading to a breach of obligations intended to protect fundamental rights, meeting the criteria for an AI Incident rather than a hazard or complementary information.
China's facial recognition paradox

2021-03-18
Protocol
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as facial recognition technologies deployed in retail and public spaces, collecting personal biometric data without consent. This unauthorized data collection constitutes a violation of privacy rights, a breach of fundamental rights protected under applicable law. The article documents realized harm through invasive surveillance and privacy breaches, meeting the criteria for an AI Incident. Although the government also uses such technologies, the focus here is on the private sector's misuse and the resulting harm. The article also discusses regulatory responses, but these are secondary to the primary incident of harm caused by AI system misuse.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here to identify customers. The companies' use of this technology without proper consent violates privacy laws, which are legal frameworks protecting fundamental rights. This constitutes a breach of obligations under applicable law, fulfilling the criteria for harm under definition (c). The event reports actual use and legal criticism, indicating realized harm rather than potential harm. Therefore, this is an AI Incident due to the direct involvement of AI in causing a violation of rights through unauthorized data collection and surveillance.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
KOB 4
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system, and its use for customer flow measurement involves processing sensitive personal data. The article discusses privacy concerns and legal considerations, but no actual harm or violation has been reported. Kohler's cessation of use and apology indicate mitigation of potential issues. Therefore, this is not an AI Incident or AI Hazard but rather complementary information about societal and governance responses to AI use and privacy concerns.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
New Haven Register
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here to monitor visitors. The companies' failure to obtain consent as required by law directly breaches privacy rights, a fundamental human right. This constitutes a violation of obligations under applicable law protecting personal data and privacy, fitting the definition of an AI Incident under category (c). The harm is realized as it involves unauthorized collection of sensitive biometric data. Kohler's response confirms the issue's seriousness. Hence, the event is classified as an AI Incident.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
New Haven Register
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here for monitoring and data collection. The companies' failure to obtain consent as required by new legal regulations directly breaches privacy rights, a fundamental human right. This constitutes a violation of legal obligations protecting personal data, thus meeting the criteria for an AI Incident under violations of human rights or breach of applicable law. The harm is realized as it involves unauthorized collection and processing of sensitive biometric data, which threatens privacy and security.
City shops using face recognition cameras

2021-03-15
SHINE
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment and use of AI-based face recognition systems that capture and analyze sensitive personal information without consumer consent, which is a direct violation of privacy rights and legal regulations. The AI system's role is pivotal as it enables real-time identification, tracking, and profiling of individuals, including blacklisting certain groups. This constitutes an AI Incident because the AI system's use has directly led to violations of fundamental rights and legal obligations, fulfilling the criteria for harm under the framework.
Firms in facial recognition row

2021-03-16
archive.shine.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment and use of facial recognition AI systems to capture and record sensitive personal data such as facial images, gender, age, nationality, and mood without consumers' authorization or permission, which is against Chinese law. This unauthorized data collection constitutes a breach of privacy rights and legal obligations, directly causing harm to individuals' rights. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems in causing violations of fundamental rights and legal protections.
China state TV raps Kohler, BMW for using facial recognition

2021-03-17
Yakima Herald-Republic
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The companies' use of this technology without obtaining legally required permission directly breaches privacy rights, a form of human rights violation. The article reports that this use has already occurred, constituting realized harm. The involvement of AI in causing this harm is explicit and central to the event. Hence, the event meets the criteria for an AI Incident under violations of human rights or applicable law protecting fundamental rights.
China state TV raps Kohler, BMW for using facial recognition

2021-03-16
WREX
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here to identify and track individuals without their consent, which constitutes a violation of privacy rights. The criticism by Chinese state TV highlights that this use is against privacy rules, implying harm to individuals' rights. Since the AI system's use has directly led to a breach of privacy obligations, this qualifies as an AI Incident under the framework's definition of violations of human rights or breach of legal protections.
BC-AS--China-Facial Reco, 0333

2021-03-16
nampa.org
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) in customer identification. The criticism centers on a possible violation of privacy rules, which relates to a breach of obligations under applicable law intended to protect fundamental rights. Although the article does not confirm harm has occurred, the accusation implies that the use of AI may have already led to privacy violations. Therefore, this qualifies as an AI Incident due to the direct or indirect breach of privacy rights caused by the AI system's use.
AP - BC-AS--China-Facial Reco, 0333

2021-03-16
nampa.org
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition, which is an AI system, to identify customers. The criticism by state TV highlights concerns about privacy violations, which relate to breaches of legal obligations intended to protect fundamental rights. Since the article indicates a possible violation of privacy rules due to the use of AI facial recognition, this constitutes an AI Incident involving violations of rights under applicable law.
"News 1+1", 2021-03-17: Facial Recognition Must Not Keep Developing "While Ailing"!

2021-03-18
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The report exposes incidents of misuse and unauthorized data collection, which can be considered violations of fundamental rights and privacy. Since these abuses have already occurred and caused harm, this qualifies as an AI Incident under the category of violations of human rights or breach of legal protections.

CCTV 3.15 Exposé: Facial Recognition Cameras Installed Across Industries, Illegally Recording Customer Data

2021-03-16
香港01
Why's our monitor labelling this an incident or hazard?
The article describes the deployment and use of facial recognition AI systems that capture and record biometric data of customers without their informed consent, which breaches legal requirements for personal data protection. This unauthorized data collection and profiling constitute violations of fundamental rights and legal obligations, fulfilling the criteria for an AI Incident. The company's subsequent apology and remediation efforts do not negate the fact that harm occurred through unlawful data processing.

CCTV Calls Out Foreign Retailers for Illegally Accessing Personal Data; Commentators Accuse China of "a Thief Crying 'Stop, Thief'"

2021-03-19
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI-based facial recognition systems by companies and government entities, leading to large-scale unauthorized collection and misuse of personal biometric data, which constitutes a violation of privacy rights (a human rights violation). The harm is realized and ongoing, as the technology is widely deployed and abused, causing direct harm to individuals' rights. The Chinese government's own use of these AI systems for surveillance and social control further exacerbates the harm. The accusations by Chinese media against foreign companies are framed as a diversion from these harms. Given the direct involvement of AI systems in causing privacy violations, this event meets the criteria for an AI Incident.

CCTV 315 Anti-Fraud Gala Exposes Nine Major Cases, Including Clenbuterol-Fed Sheep in Hebei

2021-03-16
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems in multiple retail locations to collect biometric data without consent, which is a direct violation of privacy rights (a breach of obligations under applicable law protecting fundamental rights). The misuse of recruitment platform data, likely managed or processed by AI systems, further supports the presence of AI-related harm. These harms are realized and ongoing, not merely potential. The sheep farming issue, while serious, does not involve AI and thus does not affect the AI classification. Given the direct involvement of AI systems in causing violations of rights, this event qualifies as an AI Incident.

Multiple Well-Known Stores in China Install Facial Recognition, Illegally Accessing Personal Data

2021-03-17
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used in retail stores to collect biometric data without consent, a direct violation of privacy laws and fundamental rights. The unauthorized storage of over a hundred million facial images constitutes a significant breach of obligations under applicable law protecting personal rights. The harm is realized, as it threatens individuals' privacy and financial security. The official investigation and remedial actions further confirm the incident's seriousness. Therefore, this qualifies as an AI Incident due to direct harm and legal violations caused by the AI system's use.

315 Scandals: Recruitment Platforms Sell Job Seekers' CVs Online; Max Mara and BMW Stores Improperly Install Facial Recognition Cameras

2021-03-15
Apple Daily 蘋果日報
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems (facial recognition cameras and data processing algorithms) to collect and sell sensitive personal data without consent, leading to violations of privacy and personal rights. The unauthorized sale of CVs and biometric data has already caused harm by enabling fraud and privacy breaches. The facial recognition systems' covert operation and the large-scale accumulation of biometric data without consent represent a clear breach of legal and human rights protections. These harms fall under violations of human rights and breaches of applicable law protecting fundamental rights, qualifying this as an AI Incident.

Max Mara's Mainland China Stores Revealed to Have Secretly Installed Facial Recognition Cameras, Customers' Faces "Captured"

2021-03-17
香港經濟日報 hket.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI facial recognition systems that have been secretly installed and used to collect and store personal biometric data without consent, violating legal and privacy rights. This constitutes a breach of obligations under applicable law protecting fundamental rights, specifically privacy and data protection. The harm is direct and realized, as customers' facial data were collected and stored unlawfully, which can lead to further risks such as identity theft or unauthorized surveillance. Therefore, this qualifies as an AI Incident under the framework.

CCTV Calls Out Foreign Retailers for Illegally Accessing Personal Data; Retired US Military Officer: "A Thief Crying 'Stop, Thief'"

2021-03-19
HiNet
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of facial recognition technology used to collect and analyze biometric data. The alleged illegal collection of facial data without consent constitutes a violation of privacy rights, which falls under violations of human rights or breaches of applicable law protecting fundamental rights. Since the article reports on ongoing or past unauthorized data collection and surveillance practices, this constitutes realized harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI systems in causing violations of rights through misuse or unlawful data collection.

Multiple Well-Known Domestic Stores Allegedly Installed Facial Recognition Systems, Collecting Customer Data Without Consent

2021-03-18
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems to collect and store customers' facial biometric data without their knowledge or consent. Facial recognition qualifies as an AI system because it infers from input images to generate outputs such as identifications and tracking results. The unauthorized data collection constitutes a breach of privacy and personal data protection laws, which safeguard fundamental rights. The harm is realized, as customers' biometric data was collected and stored without consent, posing risks to their privacy and security. Therefore, this event meets the criteria for an AI Incident due to violations of human rights and privacy caused by the AI system's use.

Multiple Well-Known Stores in China Install Facial Recognition, Illegally Accessing Personal Data

2021-03-17
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used in stores for surveillance and data collection. The illegal, unauthorized storage of biometric data violates personal rights and applicable privacy laws, a breach of obligations under applicable law intended to protect fundamental rights. The harm has already occurred, as the data was collected and stored unlawfully, and official investigations and remedial actions followed. Therefore, this qualifies as an AI Incident due to realized harm linked to the use of AI systems.

Kohler Bathrooms Exposed for Illegally Capturing Customers' Facial Data; Response: Devices Removed Overnight

2021-03-16
hkcd.com
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment and use of AI facial recognition systems to collect biometric data without customer consent, violating privacy rights and legal requirements. This constitutes a breach of fundamental rights and legal obligations, fitting the definition of an AI Incident under violations of human rights or breach of applicable law. The harm has already occurred through unauthorized data collection, and the company's remedial actions are a response to this incident rather than a prevention of future harm.

Time to Make "Face-Stealing" Companies "Lose Face"

2021-03-17
big5.news.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system—facial recognition technology—that directly leads to violations of individuals' rights and legal protections, fulfilling the criteria for an AI Incident. The unauthorized collection and use of facial data harm consumers' privacy and legal rights, which are fundamental human rights. The article reports on actual harms occurring due to these practices, not just potential risks or general commentary, thus qualifying as an AI Incident rather than a hazard or complementary information.

CCTV Calls Out Foreign Retailers for Illegally Accessing Personal Data; Retired US Military Officer: "A Thief Crying 'Stop, Thief'"

2021-03-19
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI facial recognition systems to collect and analyze biometric data without consent, which directly implicates violations of privacy rights, a form of human rights violation. The involvement of AI systems in unauthorized data collection and surveillance meets the criteria for an AI Incident because it has directly led to breaches of fundamental rights. The article discusses realized harm through illegal data collection practices, not just potential risks, and thus it is not merely a hazard or complementary information. Therefore, the classification as AI Incident is appropriate.