YouTube's AI Algorithms Flood Children’s Feeds with Harmful AI-Generated Videos


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Investigations reveal YouTube's AI-driven recommendation system systematically promotes low-quality, misleading, and developmentally inappropriate AI-generated videos to children. These videos, often disguised as educational, feature distorted visuals and misinformation, raising concerns about cognitive and emotional harm to young viewers. YouTube has removed some content, but the issue persists.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems: AI tools generating videos and YouTube's AI recommendation algorithm promoting them. The harm described is cognitive overload, misinformation, and developmental disruption in young children, which is a direct harm to health and communities. The harm is occurring as children are actively exposed to and influenced by these videos. The article provides concrete examples and expert opinions on the negative impact, fulfilling the criteria for an AI Incident. Although some mitigation steps are mentioned, the primary focus is on the harm caused, not on the response, so it is not Complementary Information. The harm is realized, not just potential, so it is not an AI Hazard. Therefore, the event is best classified as an AI Incident.[AI generated]
AI principles
Safety; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard


How A.I.-Generated Videos Are Distorting Your Child's YouTube Feed

2026-02-26
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems: AI tools generating videos and YouTube's AI recommendation algorithm promoting them. The harm described is cognitive overload, misinformation, and developmental disruption in young children, which is a direct harm to health and communities. The harm is occurring as children are actively exposed to and influenced by these videos. The article provides concrete examples and expert opinions on the negative impact, fulfilling the criteria for an AI Incident. Although some mitigation steps are mentioned, the primary focus is on the harm caused, not on the response, so it is not Complementary Information. The harm is realized, not just potential, so it is not an AI Hazard. Therefore, the event is best classified as an AI Incident.

Study finds YouTube algorithm pushing AI-generated 'Junk' content to children

2026-02-27
GEO TV
Why's our monitor labelling this an incident or hazard?
The YouTube recommendation algorithm is an AI system that influences what content users, including children, see. The study shows that this AI system is recommending low-quality, misleading AI-generated videos to children, which constitutes harm to communities and potentially to children's health and development. The platform's removal of offending channels and videos confirms that harm was recognised. This therefore qualifies as an AI Incident, because the AI system's use has directly led to harm through the spread of false and low-quality content to a vulnerable group.

Investigation finds YouTube is serving mindless AI slop to toddlers and preschoolers

2026-02-27
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: YouTube's recommendation algorithm that uses AI to select and promote videos. The AI system's use has directly led to harm by flooding young children with inappropriate, nonsensical AI-generated content that undermines their learning and development, which is a form of harm to communities and potentially child safety. The harm is realized and ongoing, not merely potential. The investigation shows the AI system's prioritization of quantity over quality, leading to widespread exposure of harmful content. YouTube's partial and reactive mitigation does not negate the fact that harm has occurred. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

YouTube Flooding Children's Feeds with AI Slop, Investigation Finds

2026-02-27
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (YouTube's recommendation algorithm and AI-generated video content) as the cause of harm: they flood children's feeds with low-quality, cognitively harmful content. The harm is realized and ongoing, affecting children's cognitive development and well-being, which fits the definition of an AI Incident. By prioritizing ad revenue over child-appropriate content, the AI system's use directly leads to harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

YouTube steers children towards AI-made videos disguised as educational content, report finds

2026-02-27
eutoday.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (YouTube's recommendation engine) that is actively promoting AI-generated videos to children, leading to exposure to misleading and low-quality content. This exposure constitutes harm to children, a vulnerable group, fulfilling the harm criteria (a) and (d). The AI system's role is pivotal in surfacing this content systematically, not merely incidentally. The platform's failure to adequately label or moderate this content further implicates the AI system's use in causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

How A.I.-Generated Videos Are Distorting Your Child's YouTube Feed

2026-02-26
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems: AI-generated video content and YouTube's AI-driven recommendation algorithms. The harm is realized and ongoing, as children are exposed to misleading, cognitively overloading, and developmentally inappropriate content, which can harm their cognitive and emotional development (harm to health and communities). The AI systems' use directly leads to these harms by pushing and generating such content. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a group of people (children). The article also discusses societal and platform responses, but the primary focus is on the harm caused by AI-generated content and recommendation algorithms.