Study Finds AI Chatbots Causing Addiction-Like Harm Among U.S. Teens

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Drexel University study reveals that widespread use of AI companion chatbots like Character.AI, Replika, and Kindroid among U.S. teens has led to psychological harm, including addiction-like dependency, disrupted sleep, academic issues, and strained relationships. Teens report difficulty disengaging from these AI systems, raising concerns about their impact on youth well-being.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (chatbots powered by large language models). The harm is realized and described as behavioral addiction with negative health and social consequences for teens, which fits the definition of injury or harm to the health of a group of people. The study's findings confirm that the AI systems' use has directly led to these harms. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring and linked to the AI systems' use.[AI generated]
AI principles
Human wellbeing; Safety

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

AI system task:
Interaction support/chatbots


Articles about this incident or hazard

Teens Struggle to Break Up with Their AI Chatbots - Neuroscience News

2026-04-13
Neuroscience News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots powered by large language models). The harm is realized and described as behavioral addiction with negative health and social consequences for teens, which fits the definition of injury or harm to the health of a group of people. The study's findings confirm that the AI systems' use has directly led to these harms. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring and linked to the AI systems' use.
Study warns of rising teen dependency on AI companions

2026-04-14
News-Medical.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (companion chatbots powered by large language models) whose use has directly led to harm to individuals' health and wellbeing (psychological harm, addiction-like behavior, and disruption of daily life). This fits the definition of an AI Incident because the AI systems' use has directly caused harm to a group of people (teens). The article does not merely warn of potential harm but documents realized harm based on user reports and research findings. Therefore, the classification is AI Incident.
Teens are becoming concerned about their attachment to AI chatbots

2026-04-13
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (companion chatbots based on large language models) whose use by teens has directly led to psychological and social harms consistent with behavioral addiction. The study provides evidence of realized harm, including disrupted sleep, academic difficulties, and strained relationships, which falls under harm to health and to communities. The AI systems' role is pivotal, as the addiction-like attachment is to the chatbots themselves. This meets the criteria for an AI Incident rather than a hazard or complementary information, as the harm is occurring and documented.
Teens Worry Over Growing Attachment to AI Chatbots

2026-04-13
Mirage News
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (large language model chatbots) and discusses harms (behavioral addiction, psychological and social impacts) linked to their use by teens. These harms fall under harm to health and wellbeing (a) and harm to communities (d). However, the article neither describes a specific AI Incident in which harm has directly or indirectly occurred in a particular event, nor a plausible future harm event (AI Hazard). Instead, it reports on a research study analyzing user posts and proposes a design framework to mitigate these harms. This fits the definition of Complementary Information, as it enhances understanding of AI harms and responses without reporting a new incident or hazard.
Teens Growing Increasingly Concerned About Their Bond with AI Chatbots

2026-04-13
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (companion chatbots based on large language models) whose use has directly led to harm to the health and well-being of teenagers, fulfilling the criteria for an AI Incident. The harm is psychological addiction and social disruption, which falls under injury or harm to health and harm to communities. The article describes realized harm based on self-reported user experiences, not merely potential risk, and thus qualifies as an AI Incident rather than a hazard or complementary information.