AI-Generated Low-Quality Videos Flood YouTube, Raising Community Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated videos featuring bizarre, low-quality content have rapidly proliferated on YouTube, with several of the platform's fastest-growing channels relying solely on such material. The trend degrades user experience and misleads viewers, prompting YouTube to remove some channels and cut their revenue in response to the harm to the community. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved in estimating user age based on behavioral data. The system's use could plausibly lead to harm related to privacy and sensitive personal data exposure, especially if verification processes require submission of biometric data or ID documents. Although users express concern and discomfort, no direct harm such as data leaks or rights violations has been reported. The article focuses on the deployment and potential risks rather than an actual incident of harm. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but has not yet done so. [AI generated]
AI principles
Accountability, Human wellbeing, Transparency & explainability, Safety, Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property, Reputational

Severity
AI hazard

AI system task
Content generation, Organisation/recommenders


Articles about this incident or hazard

Viewers worry about leaks of sensitive personal information as YouTube uses AI to verify age

2025-08-14
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in estimating user age based on behavioral data. The system's use could plausibly lead to harm related to privacy and sensitive personal data exposure, especially if verification processes require submission of biometric data or ID documents. Although users express concern and discomfort, no direct harm such as data leaks or rights violations has been reported. The article focuses on the deployment and potential risks rather than an actual incident of harm. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but has not yet done so.
YouTube flooded with 'trash'

2025-08-14
Báo điện tử Tiền Phong
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating large volumes of low-quality, bizarre videos that attract millions of views, harming the online community's experience and potentially misleading viewers. This constitutes harm to communities and informational harm. However, the article does not describe a particular AI Incident with a specific harmful event or outcome, but rather an ongoing systemic issue and the platform's policy response. Therefore, it is best classified as Complementary Information, as it provides context and updates on societal and governance responses to AI-generated content issues rather than reporting a discrete AI Incident or AI Hazard.
YouTube tests using AI to manage users by age

2025-08-12
VietnamPlus
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing user behavior to infer age and enforce content restrictions. The system's use is intended to protect minors, which aligns with harm prevention. However, no actual harm or incident resulting from the AI system's malfunction or misuse is reported. The concerns about privacy and freedom of expression are potential issues but not documented harms yet. Hence, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as privacy violations or wrongful content restriction in the future.
'Cheating cats' take over YouTube!

2025-08-13
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating video content that is low quality and proliferating rapidly on YouTube and other platforms, directly causing harm to the online community by degrading content quality and user experience. YouTube's response to remove channels and cut revenue confirms the harm is recognized and ongoing. The AI involvement is explicit and central to the issue. The harm fits within the definition of harm to communities and environment (digital environment). Hence, this is an AI Incident rather than a hazard or complementary information.
YouTube flooded with "trash"

2025-08-14
Kenh14.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating large volumes of low-quality, bizarre videos that negatively impact user experience and information quality, which constitutes harm to communities. However, it does not describe a specific AI Incident where harm has directly or indirectly occurred in a discrete event, nor does it describe a plausible future harm scenario without current harm. Instead, it focuses on the broader phenomenon and YouTube's policy changes to address it. This aligns with the definition of Complementary Information, which includes governance responses and updates on AI-related harms without reporting a new incident or hazard.