AI-Generated Harmful Content Spreads Widely on TikTok


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers found that hundreds of TikTok accounts used generative AI tools to produce and spread anti-immigrant misinformation and sexualized content, including depictions of minors. These AI-generated posts accumulated 4.5 billion views in one month, highlighting the scale and impact of harmful AI-driven content on the platform.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (generative AI tools) used to create content that has directly led to harm: the spread of anti-immigrant misinformation and sexualized depictions of women and girls, including minors, which violate rights and harm communities. The content's deceptive nature and its evasion of moderation exacerbate the harm. The scale of views (billions) confirms the harm has materialized rather than remaining potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights, Robustness & digital security, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Children, General public, Other

Harm types
Public interest, Human or fundamental rights, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Anti-immigrant material among AI-generated content getting billions of views on TikTok

2025-12-03
The Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI tools) used to create content that has directly led to harm: the spread of anti-immigrant misinformation and sexualized depictions of women and girls, including minors, which violate rights and harm communities. The content's deceptive nature and its evasion of moderation exacerbate the harm. The scale of views (billions) confirms the harm has materialized rather than remaining potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Tech companies advised to label AI-generated content

2025-11-30
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The content primarily centers on government advisories, policy development, and societal/governance responses to the risks posed by AI-generated content, especially deepfakes. There is no description of a realized harm or incident caused by AI systems, nor a specific event where AI use has led to injury, rights violations, or other harms. The article's main narrative is about measures to prevent or mitigate potential harms and build trust in AI technology, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Tech firms advised to label and 'watermark' AI-generated content

2025-12-01
Khmer Times
Why's our monitor labelling this an incident or hazard?
The article centers on the government's advisory and legislative efforts to mitigate risks associated with AI-generated content, such as misinformation and deepfake abuse, which are plausible harms but not described as having directly occurred in this report. The presence of AI systems (generative AI producing deepfakes) is clear, and the potential for harm is acknowledged, but no specific AI Incident is described. The main focus is on societal and governance responses to AI risks, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Anti-immigrant material among AI-generated content getting billions of views on TikTok

2025-12-03
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating content that directly leads to harm to communities by spreading anti-immigrant narratives and sexualized depictions, including potentially exploitative material involving minors. The AI-generated content's deceptive nature and scale contribute to violations of rights and societal harm. The presence of AI systems is explicit (generative AI tools), and their use has directly led to the dissemination of harmful content. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly caused harm to communities and breaches of content policies.

Anti-immigrant material among AI-generated content getting billions of views on TikTok

2025-12-03
Head Topics
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI tools) used to produce and distribute harmful content on a large scale, leading to realized harm to communities through misinformation, harmful stereotypes, and sexualized depictions, including of minors. The AI's use in gaming the platform's algorithm and evading moderation directly contributes to these harms. Therefore, this qualifies as an AI Incident due to the direct and indirect harm caused by the AI-generated content and its dissemination.

TikTok's Viral AI Clips Are Quietly Spreading Hidden Anti-Immigrant Messages, Report Warns

2025-12-04
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating and distributing content with hidden anti-immigrant messages that have already reached billions of views, causing harm to communities through the spread of hate speech and discrimination. This meets the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The presence of AI-generated content and algorithmic manipulation is explicit, and the harm is realized rather than potential. Therefore, this is classified as an AI Incident.

TikTok is full of anti-immigrant, misogynistic AI slop

2025-12-04
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies autonomous AI systems (agentic AI accounts) generating harmful content that targets vulnerable groups, including young girls, Jewish people, and immigrants. The content includes sexualized imagery, hateful stereotypes, and fabricated news, which constitute harm to communities and violations of human rights. The AI systems' outputs have directly led to these harms by shaping TikTok feeds and influencing public perception. The lack of adequate labeling further increases the risk of harm. Hence, this qualifies as an AI Incident due to realized harm caused by AI-generated content.

AI content with xenophobic messages amasses billions of views on TikTok

2025-12-03
LiFO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating and distributing harmful content on a large scale, including xenophobic and misleading videos, which have directly led to harm to communities by spreading hate and misinformation. The AI's role is central in creating and amplifying this content, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as billions of views indicate widespread dissemination and impact. The event is not merely about AI development or potential misuse but documents actual harm caused by AI-generated content.

Mayhem on TikTok: AI-generated anti-immigrant content floods the platform

2025-12-03
reader.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used to spread harmful anti-immigrant narratives and fake news on TikTok, which has already caused harm to communities by spreading misinformation and potentially violating rights. The AI systems' use in generating and amplifying this content directly led to these harms. Although TikTok disputes the claims, the report and evidence of billions of views and thousands of posts indicate realized harm. Hence, this is an AI Incident due to direct harm caused by AI-generated content.

Anti-immigrant content created with artificial intelligence floods TikTok

2025-12-03
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI tools to create content that is anti-immigrant and sexualized, including fake news and misleading depictions. This content has been widely viewed and has caused harm to communities by spreading misinformation and potentially violating rights related to the depiction of minors. The AI systems' outputs have directly contributed to these harms by manipulating the platform's algorithm and evading detection for months. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content.

Alarming report: TikTok is flooded with AI content

2025-12-03
dnews.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and disseminate harmful content on a large scale, which has directly led to harms such as misinformation, sexualized content involving minors, and manipulation of users. These constitute violations of rights and harm to communities. Therefore, this qualifies as an AI Incident. The article does not merely discuss potential future harm or general AI developments but documents realized harms caused by AI-generated content on a major platform.

Anti-immigrant content created with artificial intelligence floods TikTok

2025-12-03
KontraNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating harmful content that is widely disseminated on a major social media platform. The harms include misinformation with anti-immigrant narratives and sexualized depictions of women and minors, which can harm communities and violate rights. The AI systems' outputs have directly led to these harms by manipulating users and spreading misleading content. The failure to properly label and moderate this content exacerbates the harm. Thus, this meets the criteria for an AI Incident due to realized harm caused by AI-generated content.

Hundreds of TikTok accounts spread AI-generated videos to stir up hatred against migrants

2025-12-04
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for video creation and bot accounts) to produce and disseminate harmful content targeting migrants, which has already caused significant harm to communities by spreading hate and misinformation. The AI system's role is pivotal in generating and amplifying this content, and the harm is realized, as billions of views have accumulated. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to communities through hate speech and misinformation.