TikTok banned around 20 accounts after a BBC and Riddance investigation revealed that AI-generated, highly sexualized avatars of Black women were being used to promote explicit content without any disclosure that the material was synthetic. The avatars, often racially stereotyped and exploitative, also appeared on Instagram, prompting Meta to open its own investigation. The incident illustrates how generative AI can be misused to create undisclosed synthetic personas that harm both individuals and communities.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems generating digital avatars and videos that were used in harmful ways, including sexual exploitation, racial stereotyping, and identity theft. The AI-generated content was misleading and not labelled as synthetic, violating platform policies and causing harm to individuals and communities; TikTok's banning of the accounts confirms that this harm was recognized. Because the harms, including the violation of rights and damage to communities, were realized rather than merely potential, the event qualifies as an AI incident rather than an AI hazard, which describes an event where harm is plausible but has not yet occurred, or complementary information.[AI generated]