AI Recommendation Algorithms Cause Addiction, Content Quality Issues, and Discriminatory Pricing on Chinese Internet Platforms


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese internet platforms' AI-driven recommendation algorithms have led to user addiction, especially among youth, the spread of low-quality or harmful content, privacy violations, and discriminatory pricing practices ('big data price discrimination'). These harms have prompted regulatory scrutiny and calls for stronger oversight and responsible algorithm design.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves AI systems, specifically algorithmic recommendation systems that use AI techniques to analyze user data and deliver personalized content. It documents realized harms including harm to communities (e.g., addiction, misinformation, low-quality content), violations of rights (privacy concerns), and economic harm (unfair pricing). These harms are directly linked to the use and design of AI systems. Therefore, this event qualifies as an AI Incident. The article also discusses governance and regulatory responses, but the primary focus is on the harms caused by AI systems in use, not just complementary information or potential hazards.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers, Children, General public

Harm types
Psychological, Economic/Property, Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement, Sales

AI system task
Organisation/recommenders


Articles about this incident or hazard


算法推荐不应跑偏变味 [Algorithmic recommendations should not go astray or lose their purpose]

2020-11-18
新华网
Why's our monitor labelling this an incident or hazard?
The article focuses on the broad societal issues and potential harms related to the use of AI-based algorithmic recommendation systems but does not report a concrete AI Incident or AI Hazard. It highlights concerns and calls for regulatory and social responses, which aligns with the definition of Complementary Information. There is no description of a specific AI system malfunction, misuse, or realized harm event, nor a direct or plausible imminent harm scenario. Therefore, the classification is Complementary Information.

我们需要什么样的"算法"? [What kind of "algorithms" do we need?]

2020-11-15
新华网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically algorithmic recommendation systems that use AI techniques to analyze user data and deliver personalized content. It documents realized harms including harm to communities (e.g., addiction, misinformation, low-quality content), violations of rights (privacy concerns), and economic harm (unfair pricing). These harms are directly linked to the use and design of AI systems. Therefore, this event qualifies as an AI Incident. The article also discusses governance and regulatory responses, but the primary focus is on the harms caused by AI systems in use, not just complementary information or potential hazards.

让“算法”给文化生活带来正能量 [Let "algorithms" bring positive energy to cultural life]

2020-11-16
新华网
Why's our monitor labelling this an incident or hazard?
The text focuses on the general effects and challenges of algorithmic recommendation technologies in cultural and information dissemination contexts. It does not report a particular event where an AI system caused harm or a near-miss incident. Nor does it describe a specific AI hazard with a plausible risk of harm. Instead, it offers a reflective and normative discussion on the societal implications and the need for responsible AI use and regulation. Therefore, it fits best as Complementary Information, providing context and insight into AI's role in culture and society without reporting a concrete incident or hazard.

我们需要什么样的“算法”? [What kind of "algorithms" do we need?]

2020-11-16
新华网
Why's our monitor labelling this an incident or hazard?
The article clearly describes AI systems (algorithmic recommendation systems) that analyze user data to deliver personalized content and pricing. It documents direct harms caused by these systems, such as harm to communities (addiction, misinformation), violations of consumer rights (discriminatory pricing), and privacy concerns. These harms have materialized and are linked to the use of AI systems. The article also discusses governance and regulatory responses, but the primary focus is on the harms caused by AI system use. Therefore, this event qualifies as an AI Incident.

2020-11-17
新华网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of algorithmic recommendation engines that influence information exposure and user behavior. The harms described include harm to communities (spread of misinformation, emotional contagion), harm to minors (addiction), and violations of rights (potential unfair commercial practices). These harms are occurring as described, making this an AI Incident. The article also discusses regulatory responses, but the primary focus is on the harms caused by the AI systems' use.

2020-11-15
人民网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithmic recommendation systems) in use and the harms they have caused, including user addiction, dissemination of low-quality or harmful content, privacy violations, and unfair pricing practices. These harms correspond to violations of rights and harm to communities. The article also discusses regulatory responses, but the primary focus is on the realized harms caused by AI system use. Hence, this is an AI Incident rather than a hazard or complementary information.

精准推送、大数据杀熟……我们需要什么样的“算法” [Targeted push notifications, big-data price discrimination... what kind of "algorithms" do we need?]

2020-11-16
人民网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithmic recommendation systems) that analyze user data to push personalized content and pricing. It documents actual harms such as youth addiction to low-quality content, spread of misleading or low-value information, and discriminatory pricing practices harming consumers. These constitute harm to communities, violations of consumer rights, and privacy breaches, all fitting the AI Incident definition. The article also discusses regulatory efforts and platform responses, but the primary focus is on the harms caused by AI system use, not just complementary information or potential hazards. Hence, classification as AI Incident is appropriate.

算法推荐不应跑偏变味 [Algorithmic recommendations should not go astray or lose their purpose]

2020-11-17
华龙网
Why's our monitor labelling this an incident or hazard?
The article explicitly refers to algorithmic recommendation systems (AI systems) and their use leading to various harms, including harm to communities (information bubbles, emotional contagion), harm to youth (addiction), and potential violations of rights (discriminatory pricing). These harms are occurring or have occurred due to the use of AI systems. Therefore, this qualifies as an AI Incident. The article also discusses regulatory and societal responses, but the primary focus is on the harms caused by AI system use, not just responses or future risks.

我们需要什么样的“算法”?互联网平台越来越“懂”用户了吗? [What kind of "algorithms" do we need? Do internet platforms really "understand" their users better and better?]

2020-11-16
finance.3news.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the use and impact of AI-based recommendation algorithms on internet platforms, describing harms such as user addiction, privacy invasion, and discriminatory pricing that stem from AI system use. However, it does not describe a discrete event in which these harms led to a specific, concrete injury or legal violation. Instead, it provides a broad overview of the challenges, societal concerns, and regulatory responses related to AI recommendation systems. Therefore, it fits best as Complementary Information, providing context and updates on AI ecosystem impacts and governance rather than reporting a discrete AI Incident or AI Hazard.

算法有温度,技术更暖心 [Algorithms with warmth make technology more heartwarming]

2020-11-17
大洋网
Why's our monitor labelling this an incident or hazard?
The article centers on the broader societal and ethical issues related to AI recommendation algorithms, including potential harms and regulatory responses, but does not describe a particular AI Incident or AI Hazard. It is primarily a reflective opinion piece advocating for responsible AI use and regulation, without detailing a specific harmful event or a credible imminent risk. Therefore, it fits best as Complementary Information, providing context and insight into AI's impact and governance rather than reporting a new incident or hazard.