AI-Generated Misinformation Spreads on TikTok via Viral Conspiracy Videos


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

On TikTok, users are leveraging generative AI to create synthetic voices and images for conspiracy theory videos, which are widely viewed and monetized. This AI-driven misinformation is causing harm by misleading communities and influencing public opinion, while TikTok struggles to effectively moderate and remove such harmful content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly through the use of AI-generated voices and images to produce conspiracy theory videos. These videos are widely viewed and monetized, indicating active use of AI in content creation. The harm is realized as misinformation and conspiracy theories spread on TikTok, impacting communities by fostering false beliefs and potential social disruption. The AI system's involvement is direct and pivotal in enabling the creation and dissemination of this harmful content. Hence, this meets the criteria for an AI Incident, as the AI system's use has directly led to harm to communities through misinformation.[AI generated]
AI principles
Accountability, Transparency & explainability, Democracy & human autonomy, Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


"We're probably all going to die": TikTok becomes a paradise for conspiracy theorists thanks to AI

2024-03-18
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated voices and images to produce conspiracy theory videos. These videos are widely viewed and monetized, indicating active use of AI in content creation. The harm is realized as misinformation and conspiracy theories spread on TikTok, impacting communities by fostering false beliefs and potential social disruption. The AI system's involvement is direct and pivotal in enabling the creation and dissemination of this harmful content. Hence, this meets the criteria for an AI Incident, as the AI system's use has directly led to harm to communities through misinformation.

AI-style disinformation in vogue on TikTok

2024-03-18
Le Journal de Montreal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic voices and images to produce conspiracy theory videos that spread misinformation on a large social media platform. This misinformation causes harm to communities by fostering false beliefs and potentially dangerous narratives. The AI's role in creating and amplifying this content is pivotal, and the harm is realized as these videos have millions of views and influence public perception. Hence, this qualifies as an AI Incident due to direct harm caused by AI-generated disinformation.

King Kong, vampires, asteroids: AI-style disinformation and conspiracy theories in vogue on TikTok

2024-03-18
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of generative AI to create synthetic voices and images that propagate conspiracy theories and misinformation on TikTok. The misinformation is actively spreading and monetized, causing harm to communities by misleading users and potentially influencing public opinion negatively. The article details the direct use of AI-generated content in causing this harm, fulfilling the criteria for an AI Incident. Although TikTok claims to remove harmful content, the presence of viral AI-generated conspiracy videos indicates realized harm. Hence, this is not merely a potential hazard or complementary information but an actual incident involving AI systems causing harm.

King Kong, vampires, asteroids: AI-style disinformation on TikTok

2024-03-18
Le Matin
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI for voice and image synthesis) used to create and spread conspiracy theories on TikTok. The misinformation is actively disseminated and monetized, causing harm to communities by spreading false narratives that can mislead and create social harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (harm to communities through misinformation). The article also discusses the platform's response and regulatory context, but the primary focus is on the realized harm caused by AI-generated misinformation, not just potential harm or complementary information.

TikTok: when and why might the United States ban the social network?

2024-03-15
Tom's Guide
Why's our monitor labelling this an incident or hazard?
Although TikTok uses AI systems for content recommendation and moderation, the article does not report any incident or harm caused by these AI systems, nor does it highlight a credible risk of AI-driven harm. The main focus is on political and legal actions concerning data privacy and foreign ownership, which is a governance issue. Therefore, this is best classified as Complementary Information, providing context on societal and governance responses related to AI-enabled platforms but not describing an AI Incident or AI Hazard.

The TikTok case: Europe caught in the middle of US-China influence wars

2024-03-17
La Libre.be
Why's our monitor labelling this an incident or hazard?
The article centers on the geopolitical and regulatory debate around TikTok, focusing on potential risks of data capture and AI-enabled manipulation of opinion, especially during elections. While these risks are plausible and significant, the article does not describe a specific event where AI use by TikTok has directly or indirectly caused harm. It also discusses European legislative efforts and the complexity of AI-driven platforms, which aligns with providing complementary information about AI governance and societal concerns. Hence, the article fits best as Complementary Information rather than an AI Incident or AI Hazard.

Many new doomsday conspiracy theories are spreading

2024-03-18
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated voices and AI tools to create conspiracy theory videos that are widely viewed and monetized on TikTok. The misinformation spread by these videos can harm communities by fostering false beliefs and social disruption. The AI system's involvement in generating the content is direct and central to the harm caused. Hence, this event meets the criteria for an AI Incident as it involves realized harm (misinformation causing harm to communities) directly linked to AI system use.

Harmful AI-generated content is rampant on TikTok

2024-03-18
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as generating harmful content (e.g., AI-generated voices and videos) that spread misinformation and conspiracy theories on TikTok. This misinformation is actively disseminated and monetized, causing harm to communities by promoting false narratives and potentially influencing public opinion and social stability. The article reports realized harm rather than just potential risk, fulfilling the criteria for an AI Incident. The involvement of AI in content creation and the resulting social harm align with the definition of AI Incident under harm to communities.

Doomsday theories are spreading at dizzying speed on TikTok; how do experts explain it?

2024-03-20
Kienthuc.net.vn
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is evident in the creation of synthetic voices and AI-generated images used in conspiracy videos, and the AI chatbot Bard's speculative predictions. The article describes the use of AI in spreading misinformation and emotional manipulation, which could plausibly lead to harm to communities through misinformation and social disruption. However, no direct or indirect harm has been reported as having occurred yet. The article also discusses legislative and platform responses, which are complementary information. Therefore, the event is best classified as an AI Hazard, reflecting the plausible future harm from AI-enabled misinformation and manipulation on social media platforms.

Harmful AI-generated content is rampant on TikTok

2024-03-18
Ngày nay Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as AI is used to generate synthetic voices and content that spread harmful conspiracy theories on TikTok. The use of AI-generated content directly leads to the dissemination of misinformation, which harms communities by spreading false and potentially dangerous narratives. The financial incentives further encourage the production of such harmful AI-generated content. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities through misinformation dissemination. The article also references policy and governance responses, but the primary focus is on the ongoing harm caused by AI-generated misinformation, not just complementary information or potential hazards.