TikTok's AI Algorithm Criticized for Promoting Harmful Content and Censorship

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok's AI-driven recommendation and moderation systems have been criticized for promoting content that glamorizes eating disorders and for allegedly suppressing posts from minority creators. These practices have harmed users' mental health and violated users' rights, highlighting the negative impact of TikTok's AI algorithms on its community.[AI generated]

Why's our monitor labelling this an incident or hazard?

TikTok's AI-powered recommendation algorithm is explicitly involved in the event: it suggests content that can trigger or worsen eating disorders. The harm, to users' mental health, is a form of injury to health and has already materialized, as described by users and campaigners. The AI system's use in content recommendation and moderation has indirectly led to this harm by promoting harmful content or failing to remove it adequately. This fits the definition of an AI Incident because the system's development and use have directly or indirectly harmed people's health. The article does not merely warn of potential harm but reports ongoing harm experienced by users, so it is not an AI Hazard or Complementary Information; nor is it unrelated, because the AI system is central to the issue.[AI generated]
AI principles
Fairness; Human wellbeing; Respect of human rights; Safety; Transparency & explainability; Democracy & human autonomy; Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Organisation/recommenders; Recognition/object detection


Articles about this incident or hazard

Young users may soon need their parents to help them get on TikTok

2020-06-23
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event involves AI only indirectly: TikTok uses AI systems for content recommendation and moderation, but the article focuses on policy changes and parental controls rather than on a specific AI system malfunction or harm caused by AI. It describes no direct or indirect harm from AI systems, nor a plausible future harm from AI misuse. Instead, it covers a societal and governance response to prior issues, making it Complementary Information rather than an Incident or Hazard.
Fears TikTok videos may 'trigger eating disorders'

2020-06-22
BBC
Why's our monitor labelling this an incident or hazard?
TikTok's AI-powered recommendation algorithm is explicitly involved in the event: it suggests content that can trigger or worsen eating disorders. The harm, to users' mental health, is a form of injury to health and has already materialized, as described by users and campaigners. The AI system's use in content recommendation and moderation has indirectly led to this harm by promoting harmful content or failing to remove it adequately. This fits the definition of an AI Incident because the system's development and use have directly or indirectly harmed people's health. The article does not merely warn of potential harm but reports ongoing harm experienced by users, so it is not an AI Hazard or Complementary Information; nor is it unrelated, because the AI system is central to the issue.
TikTok offered details about how its most popular feed works. Experts seem unimpressed.

2020-06-23
Vox
Why's our monitor labelling this an incident or hazard?
The article centers on TikTok's explanation of its recommendation algorithm and the skepticism from experts regarding the completeness and usefulness of this disclosure. While it touches on past allegations of censorship and content moderation issues, it does not present new evidence of harm caused by the AI system's development, use, or malfunction. There is no direct or indirect link to realized harm or a credible risk of harm from the information shared. The content is primarily informational and analytical, fitting the definition of Complementary Information as it enhances understanding of AI systems and their societal implications without reporting a new AI Incident or AI Hazard.
Health Advocates Say TikTok May Promote Eating Disorders; TikTok Reveals Algorithm Secrets

2020-06-22
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves TikTok's AI recommendation algorithm, an AI system that influences content exposure. The algorithm's use has indirectly led to harm by promoting content that glamorizes eating disorders, which can negatively impact users' mental health, especially young people. This constitutes harm to communities and health, fitting the definition of an AI Incident. The article describes realized harm and ongoing issues rather than just potential risk, so it is not merely an AI Hazard or Complementary Information. The focus is on the harmful impact of the AI system's use, justifying classification as an AI Incident.
Juneteenth Protest Hits Social Media Platform 'TikTok' Amid Censoring Claims

2020-06-20
NBC 7 San Diego
Why's our monitor labelling this an incident or hazard?
TikTok's platform uses AI systems for content recommendation and moderation. Accusations that these AI systems manipulate content visibility and suppress posts from minority creators indicate indirect harm to those users' rights and to their communities by limiting their expression. Although TikTok denies intentional bias and attributes some issues to technical glitches, the reported suppression and censorship claims point to realized harm related to the AI system's use. This therefore qualifies as an AI Incident: violations of rights and harm to communities caused by the AI system's use in content moderation and recommendation.
TikTok chalks out its content recommendation algorithm, and the problems that it faces

2020-06-22
MediaNama
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that influences what content users see. The article details how the AI system's outputs have directly or indirectly caused harms: censorship violating rights, promotion of harmful content causing community harm, and psychological impacts from filter bubbles. These constitute violations of human rights and harm to communities, fulfilling criteria for an AI Incident. The harms are realized, not just potential, as shown by content removals, bans, and public criticism. Therefore, this event qualifies as an AI Incident.