
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A Drexel University study reveals that widespread use of AI companion chatbots like Character.AI, Replika, and Kindroid among U.S. teens has led to psychological harm, including addiction-like dependency, disrupted sleep, academic issues, and strained relationships. Teens report difficulty disengaging from these AI systems, raising concerns about their impact on youth well-being.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (companion chatbots powered by large language models). The harm is realized and described as behavioral addiction with negative health and social consequences for teens, which fits the definition of harm to the health of a group of people. The study's findings link these harms directly to use of the AI systems. This is therefore an AI incident rather than a hazard or complementary information, because the harm is already occurring and is attributable to the AI systems' use.[AI generated]