TikTok Scales Back AI Video Summaries After Generating Bizarre Errors

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok scaled back its experimental AI Overviews feature after it generated wildly inaccurate and bizarre video summaries, such as describing Charli D'Amelio as a "collection of blueberries." The malfunction led to widespread misinformation and reputational harm, prompting TikTok to limit the feature's scope.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the AI-generated video overview feature) that malfunctioned by producing incorrect and misleading content summaries. This malfunction directly led to harm in the form of misinformation and reputational damage to individuals featured in the videos. Since the harm has occurred and the AI system's malfunction is central to the issue, this qualifies as an AI Incident.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
Women
General public

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

TikTok rows back on AI video overviews in US after absurd errors

2026-05-08
BBC
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved, as it generates the video summaries. Its errors produced incorrect and sometimes absurd descriptions, which amount to misinformation or misleading content, a form of informational harm to communities. However, the article does not report any direct or indirect harm, such as injury, rights violations, or significant community harm, resulting from these AI summaries. TikTok's rollback and adjustments are a response to these issues, fitting the definition of Complementary Information. There is no indication that the AI system's malfunction could plausibly lead to more serious harm than described, so the event does not meet the threshold for an AI Hazard or AI Incident.
TikTok scales back AI-generated video overviews after absurd errors

2026-05-08
BBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-generated video overview feature) that malfunctioned by producing incorrect and misleading content summaries. This malfunction directly led to harm in the form of misinformation and reputational damage to individuals featured in the videos. Since the harm has occurred and the AI system's malfunction is central to the issue, this qualifies as an AI Incident.
TikTok pulls back on an AI feature that described Charli D'Amelio as a collection of blueberries

2026-05-06
Business Insider
Why's our monitor labelling this an incident or hazard?
The AI system was involved in generating inaccurate content summaries (hallucinations), which is a malfunction of AI outputs. However, the harm described is limited to misleading or incorrect video descriptions, with no evidence of significant harm to health, rights, infrastructure, or communities. The company is actively responding by pulling back and modifying the feature, indicating ongoing mitigation. The event describes neither realized harm meeting the AI Incident criteria nor a plausible future harm scenario that would qualify as an AI Hazard. Instead, it provides an update on AI system performance and the company's response, fitting the definition of Complementary Information.
TikTok's AI Overviews Probably Thinks This Story Is a Blueberry

2026-05-08
CNET
Why's our monitor labelling this an incident or hazard?
The AI system (TikTok's AI Overviews) is explicitly mentioned and involved in generating inaccurate content summaries. However, the inaccuracies, while notable, do not appear to have caused any significant harm as defined (e.g., injury, rights violations, or disruption). The event reports on the system's malfunction and the company's mitigation steps (scaling back the feature), which aligns with providing supporting information about AI system performance and governance responses. There is no credible indication of plausible future harm beyond minor misinformation, so it does not meet the threshold for an AI Hazard. Hence, the event is Complementary Information.
TikTok scales back AI-generated video descriptions after absurd errors

2026-05-09
Ada Derana
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: TikTok's AI-generated video description feature. The system malfunctioned by producing absurd and inaccurate summaries. While the resulting harm is primarily reputational and informational, it does not rise to the level of injury, rights violations, or significant community harm as defined. The company's decision to scale back the feature and address the errors is a response to the malfunction. This event is therefore best classified as Complementary Information, providing an update on AI system performance and mitigation efforts rather than reporting a direct or plausible harm incident or hazard.
TikTok scales back AI-generated video descriptions after absurd errors

2026-05-08
Capital FM Kenya
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating video descriptions that were factually incorrect and bizarre, directly leading to misinformation and potential reputational harm. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to communities (misinformation) and possibly individuals (celebrity misrepresentation). The company's response to scale back the feature is a mitigation step but does not negate the incident classification. Therefore, this event is best classified as an AI Incident.
TikTok Pulls Back AI Overviews After Viral Errors Turn Videos Into 'Blueberries'

2026-05-08
Tech Times
Why's our monitor labelling this an incident or hazard?
TikTok's AI Overviews system is an AI system generating video content summaries. The AI's incorrect outputs caused widespread misinformation and user confusion, which can be considered harm to communities through misinformation and erosion of trust. This harm is realized, not just potential, as viral AI errors were shared and discussed widely. The event involves the AI system's malfunction (incorrect summaries) directly leading to this harm. Therefore, it qualifies as an AI Incident. The company's response to scale back the feature is complementary information but does not negate the incident classification.
TikTok scales back AI tool after it describes Charli D'Amelio as 'collection of blueberries'

2026-05-08
Tech Digest
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as generating false and bizarre descriptions of videos, including mischaracterizing people and their actions. This constitutes a malfunction of the AI system, which directly led to harm in the form of misinformation and reputational damage to individuals (e.g., Charli D'Amelio described as a 'collection of blueberries'). The harm is realized and materialized, not merely potential. This event therefore qualifies as an AI Incident under the definition of harm to individuals and communities through misinformation and reputational damage caused by AI malfunction.