Study Links Prolonged Use of AI Chatbot Replika to Increased Anxiety and Mental Health Risks

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

A study by Aalto University in Finland found that prolonged use of the AI chatbot Replika, designed for emotional support, can worsen users' anxiety, depression, and social isolation. Analysis of Reddit posts and interviews revealed increased signs of mental health deterioration among users over time.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Replika chatbot) whose use has been studied and found to have negative mental health impacts on users over time. The harm is to the health of persons (mental health deterioration), which fits the definition of an AI Incident. The harm is realized (not just potential), and the AI system's use is directly linked to this harm. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Human wellbeing, Safety

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

AI system task
Interaction support/chatbots


Articles about this incident or hazard

In the long run, AI depresses us

2026-04-08
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Replika chatbot) whose use has been studied and found to have negative mental health impacts on users over time. The harm is to the health of persons (mental health deterioration), which fits the definition of an AI Incident. The harm is realized (not just potential), and the AI system's use is directly linked to this harm. Therefore, this event qualifies as an AI Incident.

AI virtual assistants can worsen users' anxiety, according to a study

2026-04-07
Yahoo!
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Replika, an AI chatbot) whose use has been linked through research to negative mental health outcomes, constituting harm to persons. The harm is realized and documented, not merely potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to health. The article does not describe a future risk or a response but reports on realized harm evidenced by the study.

Study: the danger of artificial intelligence as an emotional assistant for users

2026-04-08
mdz
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (virtual assistants based on AI) as emotional support tools. The study's findings indicate that this use has directly led to harm to users' mental health, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to the health of a person or groups of people. The article describes realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's use, not on responses or updates. Therefore, this qualifies as an AI Incident.

Using AI chatbots can increase anxiety in the long term

2026-04-08
ABC Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Replika chatbot) whose use by nearly 2,000 users was studied over two years. The findings indicate that prolonged interaction with this AI system correlates with increased signs of mental health issues, including anxiety and suicidal thoughts, which are harms to health. This meets the definition of an AI Incident as the AI system's use has indirectly led to harm to persons. The article does not merely warn of potential harm but reports observed harm in user behavior and language, thus excluding classification as an AI Hazard or Complementary Information.

Interacting with virtual assistants: study warns of risks from excessive use and emotional dependence

2026-04-07
Prensa Libre
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly mentioned and is central to the study. The harm identified is psychological distress and worsening mental health conditions among users, which qualifies as injury or harm to health under the AI Incident definition. The harm is indirect, as the AI system's prolonged use correlates with increased anxiety, depression, and suicidal thoughts. The article reports realized harm based on data analysis, not just potential risk. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

AI-powered virtual assistants can worsen anxiety, according to a study

2026-04-07
Última Hora
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Replika, a chatbot with AI) whose use by individuals has been linked to negative mental health outcomes, including increased distress and social difficulties. These outcomes constitute harm to health (criterion a). The harm is indirect, as it results from the use of the AI system over time affecting users' mental well-being. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, since the harm has already been observed and documented.

Does AI cause anxiety? Study reveals the risks of virtual assistants

2026-04-07
La Voz de Michoacán
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Replika chatbot) whose use has been linked to negative mental health outcomes, which constitute harm to a group of people. The harm is indirect but clearly associated with the AI system's use, fulfilling the criteria for an AI Incident. The article reports realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's use, not on responses or broader ecosystem context.

AI virtual assistants can worsen users' anxiety, according to a study

2026-04-07
Diario El Mundo
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (virtual assistant chatbot) and discusses its use and associated harms (increased anxiety, depression, social deterioration), which fall under harm to health and communities. However, the content is a research study reporting observed correlations and potential risks rather than a specific event where the AI system directly or indirectly caused harm. There is no report of a particular incident or malfunction causing harm, nor a credible imminent risk of harm from the AI system's development or use. The main focus is on understanding and warning about possible negative effects, making it Complementary Information that supports broader AI risk assessment and governance discussions.