AI-Generated Videos on YouTube Kids Raise Alarms Over Child Development Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated, low-quality videos are flooding YouTube Kids, targeting children under two and raising concerns among experts about potential developmental harm and impaired reality perception. Despite YouTube's efforts to penalize such content, these videos remain widespread, exploiting vulnerable young viewers and generating significant revenue for creators.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems being used to generate content aimed at children under 2 years old, a group vulnerable to developmental harm, and widely consumed by that group. Experts and child advocacy groups warn that this content may cause slowed brain development and impaired reality perception, which are harms to health and development. The AI systems' role in producing and enabling the proliferation of such content is central, and the algorithmic recommendation environment further exacerbates exposure. Since the harm is occurring and linked to AI-generated content, this is an AI Incident.[AI generated]
AI principles
Accountability, Safety, Human wellbeing, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

AI-Generated Infant Content Explosion Worries Experts

2025-12-04
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to generate content aimed at children under 2 years old, a group vulnerable to developmental harm, and widely consumed by that group. Experts and child advocacy groups warn that this content may cause slowed brain development and impaired reality perception, which are harms to health and development. The AI systems' role in producing and enabling the proliferation of such content is central, and the algorithmic recommendation environment further exacerbates exposure. Since the harm is occurring and linked to AI-generated content, this is an AI Incident.
Expert's Urgent Warning About YouTube 'AI Slop' That Could Rewire Your Child's Brain

2025-12-06
Men's Journal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI-generated video content and YouTube's recommendation algorithms. The harm is developmental and cognitive impact on children, which qualifies as harm to health or communities. Since the harm is potential and plausible but not confirmed as having occurred, this fits the definition of an AI Hazard rather than an AI Incident. The article also includes a response from YouTube about its algorithm policies, but this forms part of the context rather than a separate focus on complementary information. Therefore, the classification is AI Hazard.
"Warped beyond comprehension": Experts fear how AI slop could impact early brain development

2025-12-05
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos flooding YouTube and their potential impact on toddlers' brain development and sense of reality. The AI system's use (generative AI creating video content) is central to the concern. Although no direct harm is reported, experts warn of plausible future harm to children's development from exposure to misleading AI content. This fits the definition of an AI Hazard: the AI system's use could plausibly lead to harm, but no realized harm (an AI Incident) is documented. The article also discusses platform responses but focuses mainly on the potential risk rather than a realized incident or a governance response, so it is not Complementary Information.
AI-generated videos are taking over feeds of YouTube Kids accounts - raising concern

2025-12-05
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating content that is reaching and affecting young children, a vulnerable population. The harm is developmental impairment and exposure to disturbing, low-quality content, a form of harm to both health and communities. The AI-generated videos are actively causing harm by exploiting young viewers and impairing their development. Although YouTube is attempting to mitigate this, the harm is ongoing. Hence, this event meets the criteria for an AI Incident due to realized harm caused by AI-generated content.
YouTube Creators are now making AI slop for babies: Report

2025-12-06
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being produced for and consumed by babies and toddlers, which could negatively impact their development and well-being. This constitutes indirect harm to the health of a vulnerable group caused by the use of AI systems for content generation. The harm is occurring or ongoing, not merely a potential risk, as the content is already being consumed. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to AI system use.