AI Algorithms on Social Media Linked to Child Mental Health Harm in Croatia

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

AI-driven recommendation algorithms on social media platforms are causing significant mental health issues among children in Croatia, including depression, anxiety, and exposure to harmful content. Public concern has led to legislative initiatives and petitions to ban social media use for those under 15.[AI generated]

Why is our monitor labelling this an incident or hazard?

The event explicitly involves AI systems through its discussion of social media algorithms that recommend harmful content. The article describes realized harm (mental health deterioration, eating disorders) among children and young people, caused indirectly by AI-driven content recommendation algorithms. It therefore qualifies as an AI Incident: the AI system's use has directly or indirectly led to significant harm to a group of people (children and youth).[AI generated]
AI principles
Human wellbeing; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

AI system task
Organisation/recommenders


Articles about this incident or hazard

Kekin: Girls on social media are served content about extreme weight loss

2026-02-18
IndexHR
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through its discussion of social media algorithms that recommend harmful content. The article describes realized harm (mental health deterioration, eating disorders) among children and young people, caused indirectly by AI-driven content recommendation algorithms. It therefore qualifies as an AI Incident: the AI system's use has directly or indirectly led to significant harm to a group of people (children and youth).
Ivana Kekin, speaking with Mojmira, warned of a major problem in Croatia: 'Here is what they offer our children'

2026-02-18
Net.hr
Why is our monitor labelling this an incident or hazard?
The article explicitly identifies the social media recommendation algorithm as a key mechanism of harm: it keeps children engaged with content that damages their mental health. The harms described include increased depression, anxiety, eating disorders, and exposure to inappropriate content, which constitute harm to health and harm to communities, and they are directly linked to the AI system's use. Given the realized harm, the algorithm's pivotal role in causing it, and the legislative responses and societal concern the article describes, the event is classified as an AI Incident rather than a mere hazard or complementary information.
Kekin: The algorithm is tailored to keep everyone, including children, on the social network - like any addiction industry

2026-02-18
miss7mama.24sata.hr
Why is our monitor labelling this an incident or hazard?
The article explicitly identifies AI algorithms on social media platforms as a direct cause of harm to children's mental health, including depression, anxiety, and exposure to harmful content. This constitutes harm to health and well-being (harm to persons) and harm to communities. Because the harm is occurring and the AI system's use is central to causing it, the event qualifies as an AI Incident. The article also discusses societal and legislative responses, but its primary focus is the realized harm caused by AI systems.
'Here is what they offer our children': Ivana Kekin, speaking with Mojmira, warned of a major problem in Croatia | Riportal

2026-02-18
riportal.net.hr
Why is our monitor labelling this an incident or hazard?
The article explicitly identifies social media algorithms as AI systems causing harm by promoting addictive and harmful content to children, leading to mental health issues such as depression, anxiety, and eating disorders. The harm is realized and ongoing, not merely potential, and the reported legislative responses and public concern underline its significance. The event therefore meets the criteria for an AI Incident: direct harm to health and to communities caused by the AI system's use.