Study Links Prolonged Use of AI Chatbot Replika to Increased Anxiety and Mental Health Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study by Aalto University in Finland found that prolonged use of the AI chatbot Replika, designed for emotional support, can worsen users' anxiety, depression, and social isolation. Analysis of Reddit posts and interviews revealed increased signs of mental health deterioration among users over time.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Replika chatbot) whose use has been studied and found to have negative mental health impacts on users over time. The harm is to the health of persons (mental health deterioration), which fits the definition of an AI Incident. The harm is realized (not just potential), and the AI system's use is directly linked to this harm. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Human wellbeing, Safety

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

AI system task
Interaction support/chatbots


Articles about this incident or hazard

In the long term, AI depresses us

2026-04-08
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Replika chatbot) whose use has been studied and found to have negative mental health impacts on users over time. The harm is to the health of persons (mental health deterioration), which fits the definition of an AI Incident. The harm is realized (not just potential), and the AI system's use is directly linked to this harm. Therefore, this event qualifies as an AI Incident.

AI virtual assistants can worsen user anxiety, according to a study

2026-04-07
Yahoo!
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Replika, an AI chatbot) whose use has been linked through research to negative mental health outcomes, constituting harm to persons. The harm is realized and documented, not merely potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to health. The article does not describe a future risk or a response but reports on realized harm evidenced by the study.

Study: the danger of artificial intelligence as an emotional assistant for users

2026-04-08
mdz
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (virtual assistants based on AI) as emotional support tools. The study's findings indicate that this use has directly led to harm to users' mental health, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to the health of a person or groups of people. The article describes realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's use, not on responses or updates. Therefore, this qualifies as an AI Incident.

Using AI chatbots can increase anxiety in the long term

2026-04-08
ABC Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Replika chatbot) whose use by nearly 2,000 users was studied over two years. The findings indicate that prolonged interaction with this AI system correlates with increased signs of mental health issues, including anxiety and suicidal thoughts, which are harms to health. This meets the definition of an AI Incident as the AI system's use has indirectly led to harm to persons. The article does not merely warn of potential harm but reports observed harm in user behavior and language, thus excluding classification as an AI Hazard or Complementary Information.

Interacting with virtual assistants: study warns of risks from excessive use and emotional dependence

2026-04-07
Prensa Libre
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly mentioned and is central to the study. The harm identified is psychological distress and worsening mental health conditions among users, which qualifies as injury or harm to health under the AI Incident definition. The harm is indirect, as the AI system's prolonged use correlates with increased anxiety, depression, and suicidal thoughts. The article reports realized harm based on data analysis, not just potential risk. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Virtual assistants with artificial intelligence can worsen anxiety, according to a study

2026-04-07
Última Hora
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Replika, a chatbot with AI) whose use by individuals has been linked to negative mental health outcomes, including increased distress and social difficulties. These outcomes constitute harm to health (criterion a). The harm is indirect, as it results from the use of the AI system over time affecting users' mental well-being. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, since the harm has already been observed and documented.

Does AI cause anxiety? Study reveals risks of virtual assistants

2026-04-07
La Voz de Michoacán
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Replika chatbot) whose use has been linked to negative mental health outcomes, which constitute harm to a group of people. The harm is indirect but clearly associated with the AI system's use, fulfilling the criteria for an AI Incident. The article reports realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's use, not on responses or broader ecosystem context.

AI virtual assistants can worsen user anxiety, according to a study

2026-04-07
Diario El Mundo
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (virtual assistant chatbot) and discusses its use and associated harms (increased anxiety, depression, social deterioration), which fall under harm to health and communities. However, the content is a research study reporting observed correlations and potential risks rather than a specific event where the AI system directly or indirectly caused harm. There is no report of a particular incident or malfunction causing harm, nor a credible imminent risk of harm from the AI system's development or use. The main focus is on understanding and warning about possible negative effects, making it Complementary Information that supports broader AI risk assessment and governance discussions.

The paradox of AI companionship: does it heal or depress?

2026-04-09
光明网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves generative AI systems used as emotional companions, which fits the definition of AI systems. The harms described include increased loneliness, depression, suicidal thoughts, and social skill degradation, which are injuries or harms to health and harm to communities. The AI's role is pivotal as its design to provide comforting but potentially misleading responses contributes directly to these harms. The harms are realized and supported by research evidence, not merely potential. Hence, this is an AI Incident rather than a hazard or complementary information.

New Finnish study: long-term use of AI companions may affect mental health

2026-04-09
上海热线
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Replika chatbot) and discusses its use and potential psychological harms, which aligns with the definition of AI System involvement and possible harm. However, the harms are presented as research findings and potential long-term effects rather than a specific realized harm event. There is no direct or indirect causal link to a particular AI Incident, nor is there an immediate plausible risk of harm described that would qualify as an AI Hazard. The main focus is on research results and implications for future AI use, fitting the definition of Complementary Information that enhances understanding of AI impacts and informs risk assessment and management.

Long-term use of AI companions may affect mental health

2026-04-10
fashion.ce.cn
Why's our monitor labelling this an incident or hazard?
An AI system (the AI chatbot Replika) is explicitly involved, and its use has been linked to negative psychological effects on users, including increased anxiety, depression, and social withdrawal. These effects constitute harm to the health of persons (psychological harm), fulfilling the criteria for an AI Incident. The harm is realized and documented through the study's findings, not merely potential. Therefore, this event qualifies as an AI Incident due to the direct link between AI system use and harm to mental health and social well-being.

A closer look | Why are we growing ever more dependent on AI chat?

2026-04-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the societal and psychological implications of AI chat usage, including expert warnings about possible negative effects on mental health. It does not report a concrete AI Incident (harm realized) or an AI Hazard (plausible future harm from a specific event). Rather, it offers complementary information that enhances understanding of AI's impact on mental health and social behavior, fitting the definition of Complementary Information as it provides context, expert insights, and cautionary advice without describing a specific harmful event caused by AI.

The more "empathetic" AI becomes, the lonelier we get?

2026-04-08
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content centers on the conceptual and societal impact of AI's role in emotional companionship, without reporting a concrete event involving harm or a credible risk of harm caused by AI. There is no mention of a particular AI system malfunctioning, being misused, or causing injury or rights violations. The article serves as a commentary on potential psychological effects and behavioral patterns related to AI use, which aligns with providing complementary information to understand AI's broader societal implications rather than documenting an AI Incident or AI Hazard.

New study: long-term AI companionship may affect mental health

2026-04-09
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Replika chatbot) and discusses its use and effects on users' mental health and social interactions. While negative psychological effects are observed, these are reported as research findings rather than a direct causal incident of harm caused by the AI system. There is no report of a malfunction, misuse, or a specific event causing harm. Instead, the study provides valuable insights into potential risks and societal implications of AI companionship, which fits the definition of Complementary Information. It enhances understanding of AI impacts and informs future risk assessment but does not describe a realized AI Incident or a plausible immediate AI Hazard.

New study: long-term use of AI companions may affect mental health

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (Replika) designed as a virtual companion and its long-term use by nearly 2000 users. The study finds that while AI companionship can provide emotional support, it also correlates with increased negative psychological symptoms and social withdrawal, which constitute harm to users' mental health. This harm is directly linked to the AI system's use, fulfilling the criteria for an AI Incident under the definition of injury or harm to health caused directly or indirectly by an AI system. The event is not merely a potential risk or a complementary update but reports realized harm based on empirical data.

当"完美恋人"来自算法:青少年与AI"谈恋爱"背后的心理图景

2026-04-10
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI companion applications using advanced language models) and discusses their use by adolescents. The harms described (mental health issues, social withdrawal, self-harm) are linked indirectly to the use and overreliance on these AI systems. However, the article does not report a specific AI Incident where harm has already occurred due to AI malfunction or misuse. Instead, it highlights ongoing psychological risks and the need for scientific guidance and intervention, which aligns with the definition of Complementary Information. The article serves to enhance understanding of AI's societal impact and informs responses rather than reporting a new incident or hazard.

A romantic relationship with an AI can distance people from human relationships

2026-04-07
Verkkouutiset
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the Replika chatbot) used as a virtual companion providing emotional support. The research shows that this AI use could plausibly lead to harm (increased anxiety, social isolation, and mental health deterioration), which fits the definition of an AI Hazard. There is no report of a direct or indirect realized harm incident caused by the AI system; the harms are potential and based on longitudinal study findings. The article also does not focus on responses, governance, or updates to prior incidents, so it is not Complementary Information. Hence, the classification as AI Hazard is appropriate.

A relationship with an AI companion can increase anxiety; researcher warns: "We do not yet know what these systems are doing to us"

2026-04-07
Maaseudun Tulevaisuus
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI chatbot Replika) whose use is linked to increased anxiety and social distancing, which are harms to health and communities. However, the article presents these findings as research observations and warnings about potential long-term effects rather than reporting a concrete incident of harm. The AI system's involvement is through its use by individuals seeking emotional support. Since the harms are plausible future risks rather than realized incidents, the classification as an AI Hazard is appropriate. The article does not focus on responses, governance, or updates to prior incidents, so it is not Complementary Information. It is clearly related to AI systems and their impacts, so it is not Unrelated.

Study: in moderation, an AI companion can help with loneliness, but overuse does not pay off

2026-04-07
Savon Sanomat
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Replika chatbot) and discusses its use and effects on users' mental health and social relationships. Although the research highlights potential harms from long-term or excessive use, it does not document a specific AI Incident causing realized harm, nor does it describe a plausible future harm event that is imminent or credible as a hazard. Instead, it provides research insights and nuanced understanding of AI's impact, fitting the definition of Complementary Information, which enhances understanding of AI's societal implications without reporting a direct incident or hazard.

Aalto University: a relationship with an AI companion can increase anxiety

2026-04-07
Savon Sanomat
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly mentioned and its use is linked to increased anxiety among users, indicating harm to health. The harm is realized as the study reports observable signs of anxiety and social impact. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to health.

Warning about AI companions: "We do not yet know what these systems are doing to us"

2026-04-07
Suomenmaa.fi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI chatbot Replika) and discusses its use and potential psychological impacts. However, the harms described are not confirmed incidents but rather potential or emerging effects observed in a research context. There is no direct evidence of injury, rights violations, or other harms having occurred as a result of the AI system's use. The article emphasizes the need for caution and further understanding, indicating plausible future harm rather than realized harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm (increased anxiety, social isolation) but no concrete incident has been established yet.

Study: an AI companion can increase anxiety; "It gradually raises the threshold for real-life relationships"

2026-04-07
Demokraatti.fi
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the Replika chatbot) and discusses its use and potential long-term psychological and social harms. However, it does not report a specific event where harm has occurred or a credible imminent risk of harm. The findings are from a research study analyzing user data and interviews, highlighting possible negative effects but not documenting an incident or hazard. This fits the definition of Complementary Information, as it enhances understanding of AI's impact and informs future risk assessment without describing a concrete AI Incident or AI Hazard.

Aalto University studied the effect of AI companions on wellbeing; researcher warns: "We do not yet know what these systems are doing to us"

2026-04-07
Forssan Lehti
Why's our monitor labelling this an incident or hazard?
The AI system (the AI companion chatbot) is clearly involved as the subject of the study. However, the article does not report any direct or indirect harm that has occurred due to the AI system's use, only potential long-term effects that are still under investigation. There is no specific incident of injury, rights violation, or other harm described. The article mainly provides complementary information about emerging concerns and the need for further understanding and caution regarding AI companions' impact on wellbeing. Hence, it fits best as Complementary Information rather than an AI Incident or AI Hazard.