TikTok's AI Moderation Fails to Curb Antisemitic Content, Prompting High-Level Talks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Israeli President Isaac Herzog met with TikTok executives to address a surge in antisemitic and anti-Israel content on the platform. Despite AI-driven moderation, harmful content, including hate speech and misinformation, remained online, highlighting failures in TikTok's AI systems and resulting in harm to affected communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

TikTok uses AI systems for content recommendation and moderation. Antisemitic and harmful content was either removed only after a delay or not removed at all, indicating a failure or inadequacy in the AI system's moderation capabilities. By allowing hate speech and misinformation to persist, this malfunction (ineffective moderation) directly harmed communities, which fits the definition of an AI Incident.[AI generated]
AI principles
Fairness; Respect of human rights; Accountability; Robustness & digital security; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

Business function
Other

AI system task
Other


Articles about this incident or hazard

TikTok execs meet Israel's president in Jerusalem to discuss its antisemitism problem

2024-02-07
ThePrint
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content recommendation and moderation. The presence of antisemitic and harmful content that was either removed only after a delay or not removed at all indicates a failure or inadequacy in the AI system's moderation capabilities, which has directly led to harm to communities by allowing hate speech and misinformation to persist. This fits the definition of an AI Incident as the AI system's use and malfunction (ineffective moderation) has directly led to harm to communities.
'We must fight lies and hatred': Israeli President discusses antisemitism with TikTok executives

2024-02-06
ThePrint
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-driven platform that uses AI systems for content recommendation and moderation. The article highlights the presence of harmful antisemitic content and the challenges in moderating it, which relates to AI system use. However, no specific AI Incident (direct or indirect harm caused by AI) or AI Hazard (plausible future harm) is described. The main focus is on the discussion between the Israeli President and TikTok executives about combating misinformation and hate, and on the resignation of a lobbyist citing bias concerns. This fits the definition of Complementary Information, as it provides societal and governance context and responses related to AI systems but does not report a new incident or hazard.
Herzog meets with TikTok officials amid sharp rise of antisemitic, anti-Israel content on platform

2024-02-06
The Times of Israel
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content moderation and recommendation. The article reports that antisemitic and anti-Israel content has proliferated on the platform, with some harmful content remaining online despite removal efforts. This indicates a malfunction or failure in the AI moderation system, leading to harm to communities through the spread of hate speech and misinformation. The direct link between the AI system's use and the harm caused by the content's circulation meets the criteria for an AI Incident. The meeting and research findings confirm the AI system's role in realized harm, not merely potential future harm, ruling out classification as a hazard or complementary information.
President Herzog meets with TikTok senior global management

2024-02-07
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system implicitly: TikTok's content moderation and its detection of fake accounts and hate speech rely on AI-driven algorithms for content analysis and account identification. The presence of 160 million fake accounts spreading antisemitic rhetoric, together with the platform's partial failure to remove all harmful content, indicates that the AI system's use or malfunction has indirectly harmed communities through the spread of hate and misinformation. This qualifies as an AI Incident due to violations of human rights and harm to communities arising from the AI system's role in content moderation and dissemination.
'We must fight lies and hatred': Israeli President discusses antisemitism with TikTok executives | International

2024-02-06
Devdiscourse
Why's our monitor labelling this an incident or hazard?
While TikTok uses AI systems for content moderation and fake-account detection, the article does not explicitly attribute harm to AI system failure or misuse. The discussion centers on the presence of harmful content and the platform's moderation policies, a broader governance and societal issue rather than a specific AI Incident or Hazard. The article mainly provides context on ongoing challenges and responses related to AI-driven content moderation and misinformation, fitting the definition of Complementary Information: it supports understanding of AI's role in content moderation and its societal impacts without describing a new incident or hazard.
World News | TikTok Execs Meet Israel's President in Jerusalem to Discuss Its Antisemitism Problem | LatestLY

2024-02-07
LatestLY
Why's our monitor labelling this an incident or hazard?
The article centers on TikTok's AI-driven content moderation challenges and the presence of harmful content, but it does not report a specific AI Incident or AI Hazard. The meeting and the executives' pledge to combat antisemitism represent a governance and societal response to known issues, fitting the definition of Complementary Information. There is no direct or indirect harm newly caused by AI systems reported here, nor a plausible future harm scenario beyond what is already known. Therefore, the event is best classified as Complementary Information.
'We must fight lies and hatred': Israeli President discusses antisemitism with TikTok executives

2024-02-06
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
TikTok is a social media platform that uses AI systems for content recommendation, moderation, and fake-account detection. The spread of antisemitic and hateful content, as well as fake news, directly harms communities by promoting hatred and misinformation. The article states that harmful content was uploaded and remained online for extended periods, indicating realized harm. The AI systems involved in content moderation and account detection are central to the issue, as their effectiveness and biases shape how widely harmful content spreads. This event therefore qualifies as an AI Incident: direct harm to communities caused by the use and malfunction of AI systems for content moderation and fake-account detection.
TikTok execs meet Israel's president in Jerusalem to discuss its antisemitism problem

2024-02-07
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through TikTok's use of AI for content moderation and detection of fake accounts spreading antisemitic and hateful content. The harm described (antisemitism, fake news, hate speech) is occurring on the platform, but the article does not attribute direct or indirect causation of harm to AI system malfunction or misuse. Instead, it focuses on the platform's response and cooperation with authorities to mitigate these issues. This fits the definition of Complementary Information, as it details governance and societal responses to AI-related challenges rather than reporting a new AI Incident or AI Hazard.
Herzog, TikTok Execs Discuss Rising Antisemitism on Social Media Platform

2024-02-06
The Jewish Press - JewishPress.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly, as TikTok's content moderation and recommendation algorithms are AI-based and play a role in both the spread and the management of harmful content. However, the event is primarily a discussion and presentation of research findings about existing harmful content and platform responses; it describes neither a new AI incident nor a plausible future hazard. It is best classified as Complementary Information because it provides context and updates on societal and governance responses to AI-related harms on social media platforms.
President Herzog discusses rise of antisemitism with TikTok execs - I24NEWS

2024-02-07
i24NEWS English
Why's our monitor labelling this an incident or hazard?
TikTok's platform uses AI systems for content recommendation and moderation. The spread of antisemitic hate speech and conspiracy theories, including Holocaust denial and graphic hateful content, directly harms communities and violates human rights. The identification of a large number of fake accounts spreading such content shows the AI system's role in enabling, or failing to prevent, this harm. Because the event describes ongoing harm caused by AI system use, it meets the criteria for an AI Incident.