TikTok and Instagram Ban Accounts for Unlabeled, Exploitative AI-Generated Black Female Avatars


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok banned around 20 accounts after a BBC and Riddance investigation revealed the use of AI-generated, highly sexualized Black female avatars to promote explicit content without disclosure. The avatars, often racially stereotyped and exploitative, also appeared on Instagram, prompting Meta to investigate. The incident highlights the misuse of generative AI and the harm it causes to individuals and communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event clearly involves AI systems generating digital avatars and videos that were used in harmful ways, including sexual exploitation, racial stereotyping, and identity theft. The AI-generated content is misleading and was not properly labelled, violating platform policies and causing harm to individuals and communities. TikTok's banning of the accounts confirms that the harm was recognized. Because the harms, including violations of rights and harm to communities, are realized rather than merely potential, this is classified as an AI incident rather than a hazard or complementary information.[AI generated]
AI principles
Fairness; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard


AI videos of sexualised black women removed from TikTok after BBC investigation

2026-03-22
BBC

TikTok bans accounts using AI-manipulated videos of social media users posing as black women influencers

2026-03-22
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated videos and images, fulfilling the AI-system-involvement criterion. Using these AI-generated avatars to impersonate real people and drive traffic to explicit content without consent violates intellectual property rights and potentially human rights, and harms the affected individuals and communities. The harm is realized, not merely potential, as the manipulated content was actively disseminated and has caused reputational and personal harm. This therefore qualifies as an AI incident under the definitions provided, specifically violations of human rights and intellectual property rights, and harm to communities.

TikTok bans accounts using AI to impersonate Black influencers

2026-03-22
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake avatars that impersonate Black influencers in a harmful and exploitative way, leading to violations of rights and harm to communities. The AI-generated content misleads users and directs them to adult websites, causing social and ethical harm. The banning of the accounts is a response to this realized harm; the AI system's use has therefore directly led to harm, meeting the criteria for an AI incident.

AI videos of sexualised black women removed from TikTok after BBC investigation

2026-03-22
The Namibian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating digital avatars and videos that have directly led to harms, including exploitation, racial stereotyping, misleading of users, and intellectual property theft. The misuse of AI-generated content to promote sexually explicit material without consent, together with the failure to label AI content as required, directly caused harm. TikTok's banning of the accounts confirms that the harm was recognized. The harms fall under violations of rights and harm to communities, meeting the criteria for an AI incident rather than a hazard or complementary information.

Social media accounts banned for using AI without disclosure

2026-03-23
Women's Agenda
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating sexually explicit content without disclosure, breaching platform guidelines and causing harm through racist and exploitative portrayals of Black women. The non-consensual use of a content creator's videos further constitutes a violation of rights. The harms described include violations of human rights, harm to communities, and gendered violence facilitated by AI-generated deepfakes and manipulated images. These harms have materialized rather than remaining merely potential, making this an AI incident rather than a hazard or complementary information.