Douyin's AI Algorithms Enable Nighttime Exploitative Content and User Manipulation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Douyin's AI-driven content recommendation and live-streaming algorithms have enabled sexually suggestive and exploitative live streams to proliferate during late-night hours, when moderation is looser than during the day, exposing users to inappropriate content and to manipulative practices aimed at extracting money. This has resulted in community harm and user exploitation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Douyin's content recommendation and live streaming platform algorithms) whose use has directly led to harms such as exposure to inappropriate and exploitative content, manipulation of users to spend money, and degradation of community standards. These harms fall under harm to communities and possibly violations of rights. The article describes these harms as ongoing and realized, not merely potential. Therefore, this is classified as an AI Incident.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological; Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard


2022 Douyin Group Corporate Social Responsibility Report: Over 14,000 Accounts Penalized for Cyberbullying During the Year

2023-03-16
chinaz.com
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems (e.g., anti-fraud models, content moderation algorithms) in the operation of Douyin's platform to combat misinformation, cyberbullying, and fraud. These AI systems are actively used to reduce harm, but the article does not report a new AI Incident (harm caused by AI) or an AI Hazard (plausible future harm). Instead, it reports on the effectiveness and scale of AI-driven interventions and platform governance measures, which fits the definition of Complementary Information. There is no indication that the AI systems caused harm or malfunctioned; rather, they are part of harm mitigation efforts.

National Security Agencies Call for a Complete Ban on Douyin | United Daily News (UDN)

2023-03-18
UDN
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks posed by TikTok, an AI-driven social media platform, particularly regarding misinformation and national security. However, it does not report any realized harm or incident caused by AI systems, only the possibility and political debate about banning the app. Therefore, it fits the definition of an AI Hazard, as the development and use of the AI system (TikTok) could plausibly lead to harms such as misinformation and social disruption, but no actual harm event is described. It is not Complementary Information because it is not updating or responding to a past incident but discussing a potential future risk and policy response. It is not unrelated because it involves AI systems and their societal impact.

Have You Seen Douyin at 3 a.m.? All Kinds of Hidden Filth, Comparable to an Upgraded Version of Momo!

2023-03-18
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Douyin's content recommendation and live streaming platform algorithms) whose use has directly led to harms such as exposure to inappropriate and exploitative content, manipulation of users to spend money, and degradation of community standards. These harms fall under harm to communities and possibly violations of rights. The article describes these harms as ongoing and realized, not merely potential. Therefore, this is classified as an AI Incident.