Experts Warn of Risks in Using AI Chatbots as Virtual Psychologists


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Russian psychologist Ekaterina Orlova warns that using AI chatbots as virtual psychologists poses risks, including potential data breaches, de-anonymization, manipulation, and inadequate psychological support due to lack of empathy and inability to interpret nonverbal cues, potentially leading to harm or delayed professional care.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (chatbots) and their use affecting users' emotional well-being. Although no direct harm incident is described, the study warns of plausible future emotional harms such as increased loneliness and dependency caused by AI chatbot interactions. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm to health (emotional/mental health) of individuals or groups. There is no indication of an actual realized harm event, nor is the article primarily about responses or governance, so it is not an AI Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Accountability; Human wellbeing; Democracy & human autonomy

Industries
Healthcare, drugs, and biotechnology; Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Psychological

Severity
AI hazard

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard


The other side of AI chatbots: they could influence your emotional state without you realizing it

2025-06-18
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) and their use affecting users' emotional well-being. Although no direct harm incident is described, the study warns of plausible future emotional harms such as increased loneliness and dependency caused by AI chatbot interactions. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm to health (emotional/mental health) of individuals or groups. There is no indication of an actual realized harm event, nor is the article primarily about responses or governance, so it is not an AI Incident or Complementary Information.

AI chatbots: the invisible side that could affect your emotional well-being more than you imagine

2025-06-17
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots) and discusses their impact on users' emotional well-being, which falls under harm to health and communities. However, it does not describe a concrete event where harm has directly or indirectly occurred due to the AI system's malfunction or misuse. Instead, it presents research findings and expert warnings about potential negative effects, implying plausible future harm. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to harm but does not document an actual incident of harm.

Chatbots and mental health: an accessible solution or a dangerous experiment?

2025-06-20
Agencia Sinc
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots and large language models) used in mental health therapy. It discusses their development and use, including misuse risks such as false therapist claims and potential to worsen mental health conditions. Although no concrete harm incident is described, the article presents credible evidence and expert warnings about plausible future harms, including emotional dependency, misinformation, and privacy breaches. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to harm (mental health deterioration, ethical violations). It is not an AI Incident because no actual harm event is reported, nor is it Complementary Information or Unrelated, as the focus is on potential risks and impacts of AI in mental health.

Chatbots and mental health: an accessible solution or a dangerous experiment?

2025-06-20
infoLibre.es
Why's our monitor labelling this an incident or hazard?
The article centers on AI systems (LLMs like ChatGPT and specialized chatbots) used in mental health therapy contexts. It does not report a specific realized harm event but extensively discusses the potential for harm, such as worsening mental health, false therapeutic claims, dependency, privacy breaches, and ethical violations. These risks are credible and plausible given the AI systems' deployment and limitations. Therefore, the event qualifies as an AI Hazard because the development and use of these AI chatbots could plausibly lead to harms described under the framework, but no concrete incident of harm is documented in the article.

Mental health: beware of putting yourself in the hands of chatbots

2025-06-20
ECOticias.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots and LLMs) used in mental health contexts. It discusses their development, use, and potential misuse. While it acknowledges risks such as emotional dependency, false expectations, and privacy issues, it does not document any realized harm or incident caused by these AI systems. Instead, it summarizes research findings, expert opinions, and regulatory perspectives, which enhance understanding of AI's impact and inform future risk management. This fits the definition of Complementary Information, as it supports ongoing assessment and governance without reporting a new AI Incident or AI Hazard.