AI Chatbots Linked to Worsened Mental Health in Young People

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A survey in Germany found that 35% of young people with depression use AI chatbots for support, with 53% reporting increased suicidal thoughts and 62% feeling less need for professional help. Experts warn that reliance on AI may worsen mental health outcomes by discouraging necessary therapy.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI chatbots being used by individuals with mental health problems, including diagnosed depression. It reports that 53% of affected users experienced increased suicidal or self-harm thoughts after interacting with these AI systems, indicating realized harm to health. The AI systems' role is pivotal as they are the medium through which these effects occur. Although some users find the chatbots helpful, the documented negative outcomes and warnings from experts about the risks of substituting professional care establish this as an AI Incident involving harm to health. The article does not merely warn about potential harm but reports actual harm experienced by users.[AI generated]
AI principles
Safety, Human wellbeing

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard

Will AI soon replace therapists? What a survey reveals

2026-04-28
WEB.DE
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (chatbots like ChatGPT) used for mental health conversations. However, it does not report any direct or indirect harm caused by these AI systems; rather, it presents survey data and expert caution about potential risks. The mention of some users experiencing increased suicidal thoughts after AI use is noted but not established as a direct causal harm from the AI. The article emphasizes that AI cannot replace therapy and that professional help is necessary. Therefore, the event does not meet the threshold for an AI Incident or AI Hazard but provides important contextual and response information about AI's role in mental health, fitting the definition of Complementary Information.

Anonymous and available around the clock: Many young people turn to AI with their problems

2026-04-28
N-tv
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots being used by individuals with mental health problems, including diagnosed depression. It reports that 53% of affected users experienced increased suicidal or self-harm thoughts after interacting with these AI systems, indicating realized harm to health. The AI systems' role is pivotal as they are the medium through which these effects occur. Although some users find the chatbots helpful, the documented negative outcomes and warnings from experts about the risks of substituting professional care establish this as an AI Incident involving harm to health. The article does not merely warn about potential harm but reports actual harm experienced by users.

When young people with depression seek help from AI

2026-04-28
tagesschau.de
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT and similar language models) is explicitly mentioned as being used by young people with depression to seek help. The article reports that over half of the users experienced increased thoughts of self-harm or suicide after AI interactions, which is a direct harm to mental health. This harm is linked to the AI system's use, fulfilling the criteria for an AI Incident. Although some users report positive experiences, the presence of significant negative mental health outcomes caused or exacerbated by the AI's responses justifies classification as an AI Incident rather than a hazard or complementary information. The article also discusses risks and calls for further research, but the realized harm is central.

Young people increasingly talk to AI about their problems - WELT

2026-04-28
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used by individuals with diagnosed depression to discuss their condition. The AI's role is direct in providing conversational outputs that influence users' mental health states. The reported increase in self-harm or suicidal thoughts after AI interaction constitutes harm to health (a). The article also highlights misuse or overreliance on AI as a substitute for professional care, which is an indirect cause of harm. The lack of regulation and quality standards further exacerbates the risk. Since harm is already occurring and linked to AI use, this is classified as an AI Incident rather than a hazard or complementary information.

Survey: Young people increasingly talk to AI about their problems

2026-04-28
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used by individuals for mental health conversations, which is clearly AI system involvement. The harms described include increased suicidal thoughts and potential neglect of professional treatment, which are significant harms to health and well-being. However, the article presents these harms as survey-reported user experiences and expert warnings rather than a documented AI malfunction or misuse incident causing direct harm. The article focuses on describing the current state, risks, and benefits of AI chatbot use in mental health, without reporting a specific AI Incident or a near-miss hazard event. Therefore, it is best classified as Complementary Information, providing important context and societal response insights about AI's role in mental health support and associated risks.

More and more young people turn to AI with mental health problems

2026-04-28
rtl.de
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots like ChatGPT, Gemini, Microsoft Copilot) are explicitly mentioned as being used for mental health conversations. The article reports that 53% of affected users experienced increased suicidal or self-harm thoughts after using these AI chatbots, indicating direct harm to health (psychological harm). The AI's role is pivotal as it is the medium through which these harms occur. Although there are also potential benefits, the realized harm and risks described meet the criteria for an AI Incident under the definition of injury or harm to health caused directly or indirectly by AI system use.

Young people talk to AI about their mental health: Experts warn of a dangerous trend

2026-04-28
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots) used in mental health contexts, which fits the definition of AI systems. However, it does not describe any realized harm or incident resulting from these AI systems, only potential risks and expert warnings. There is no direct or indirect harm reported, nor a specific event of malfunction or misuse causing harm. Therefore, it does not qualify as an AI Incident. It also does not describe a specific event or circumstance that plausibly leads to harm (AI Hazard), but rather provides survey data and expert cautionary advice. This aligns with Complementary Information, as it offers contextual and supporting information about AI use and its societal implications without reporting a new incident or hazard.

AI chatbots: When digital comfort becomes a deadly danger - study warns of suicide risk

2026-04-28
Express.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly mentioned as being used by individuals with mental health issues. The study shows that these AI systems' use has directly or indirectly led to significant harm: users foregoing professional treatment and experiencing increased suicidal thoughts. This meets the criteria for an AI Incident because the AI's outputs have contributed to injury or harm to health and communities. The harm is realized and documented by the study, not merely potential. Hence, the classification as AI Incident is appropriate.

Survey: One in three young people with depression uses AI as a psycho-coach

2026-04-28
stern.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (chatbots) for mental health support, which is explicitly mentioned. The survey reveals that AI use is associated with increased negative mental health outcomes for some users, indicating potential harm. Although no specific incident of harm is detailed, the data suggest plausible risks of harm from AI use in this context. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (mental health deterioration, neglect of professional care). It is not an AI Incident since no direct or confirmed harm event is reported, nor is it Complementary Information or Unrelated.

Mental health burdens and AI: Two out of three young people confide in AI

2026-04-28
taz.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used by individuals for mental health support, but the article does not describe any actual harm or incident resulting from this use. It discusses potential risks and the need for caution but does not report any realized injury, rights violation, or other harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides complementary information about AI's role in mental health support and the current state of knowledge and recommendations.

Two thirds of young people talk to AI about mental health burdens

2026-04-28
heise online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used for mental health conversations, but no direct or indirect harm has been reported. The article emphasizes caution and the limitations of AI chatbots but does not describe any realized harm or a specific incident. It also discusses the need for scientifically validated and approved digital health applications. Therefore, this is complementary information that enhances understanding of AI's role in mental health support and the ecosystem's current state, rather than reporting an AI Incident or AI Hazard.

Help from a psychologist? This many young people trust artificial intelligence

2026-04-28
MOPO.de
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots) used for mental health conversations, but it does not report any direct or indirect harm resulting from their use. It discusses potential risks and limitations but does not describe an event where harm occurred or was narrowly avoided. The main focus is on survey results and expert guidance, which enhances understanding of AI's role in mental health support without reporting a new incident or hazard. Therefore, this is Complementary Information as it provides context and updates on AI use and societal responses without describing a specific AI Incident or AI Hazard.

AI as therapist? Experts issue urgent warnings | Heute.at

2026-04-28
Heute.at
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (chatbots) used by individuals for mental health support. While no direct harm is reported, experts warn of potential harm if users rely on AI instead of professional treatment, which could plausibly lead to injury or harm to health. This fits the definition of an AI Hazard, as the development and use of AI chatbots in this context could plausibly lead to harm, but no incident has yet occurred or been documented in the article.

AI as pastoral counselor: Study warns of risks in chatbot conversations about mental health

2026-04-28
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots like ChatGPT) used for mental health conversations. It discusses realized harms such as increased suicidal thoughts reported by users and the risk of users avoiding professional help, which are indirect harms linked to AI use. However, it does not describe a specific event or series of events where AI use directly caused harm, nor does it describe a near miss or credible imminent risk that would qualify as an AI Hazard. Instead, it reports survey data and expert opinions highlighting potential risks and benefits, which fits the definition of Complementary Information. The article enhances understanding of AI's societal impact on mental health without documenting a concrete AI Incident or AI Hazard.

Chatbots as a therapy substitute? Many young people use AI for depression

2026-04-28
Berliner Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Gemini, Microsoft Copilot) being used by individuals with depression to discuss their condition. The AI's involvement is in its use as a substitute for professional mental health care, which has led to reported harms such as increased suicidal thoughts and avoidance of medical treatment. These outcomes represent direct or indirect harm to health (a), fulfilling the criteria for an AI Incident. The article also includes expert warnings about the risks and potential harms of such AI use, reinforcing the assessment that harm is occurring. Hence, the event is best classified as an AI Incident.

Young people increasingly talk to AI about their problems

2026-04-28
Kurier
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (conversational AI for mental health support) and discusses their use and potential misuse. However, it does not describe any realized harm or incident resulting from AI use, nor does it report a specific event where harm occurred or was narrowly avoided. The focus is on the potential risks and benefits, making it a discussion of plausible future harms and opportunities rather than a concrete incident or hazard. Therefore, it fits best as Complementary Information, providing context and expert views on AI's role in mental health support without reporting a specific AI Incident or AI Hazard.

Stiftung Deutsche Depressionshilfe und Suizidprävention - Young people turn to AI for mental health problems - experts advise caution

2026-04-28
Deutschlandfunk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (chatbots and AI assistants) in a context (mental health support) where misuse or overreliance could plausibly lead to harm to individuals' health. Although no specific incident of harm is described, the article highlights credible concerns and expert warnings about potential negative outcomes. Therefore, this qualifies as an AI Hazard because the AI systems' use could plausibly lead to harm, but no direct harm has yet been documented.

AI increasingly replaces conversations about mental health problems

2026-04-28
Vorarlberg Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI chatbots (AI systems) for conversations about mental health, which directly affects users' psychological well-being. The article documents that some users experience increased suicidal or self-harm thoughts after interacting with these AI systems, indicating actual harm to health (a). The AI systems' role is pivotal as they are the medium through which these effects occur. Therefore, this qualifies as an AI Incident due to direct harm to persons' health caused by the use of AI systems.

Survey: Young people increasingly talk to AI about their problems

2026-04-28
stuttgarter-nachrichten.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT) used by individuals to discuss mental health problems. The use of these AI systems has directly led to harm, including increased suicidal thoughts and users foregoing professional medical help, which are injuries to health. The article provides evidence of these harms occurring, not just potential risks. Hence, it meets the criteria for an AI Incident, as the AI system's use has directly led to harm to persons' health.

Stress, heartbreak, pressures: Two thirds of young people talk to AI about mental health burdens

2026-04-28
Stuttgarter-Zeitung.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for conversational support on mental health issues, which is an AI system use case. However, the article does not describe any realized harm or injury resulting from the AI interactions, nor does it report any incident where AI caused or contributed to harm. Instead, it discusses potential risks and expert warnings about possible misuse or overreliance on AI for mental health support. Therefore, the event describes a plausible risk scenario but no actual harm has occurred. This fits the definition of an AI Hazard, as the use or misuse of AI in this context could plausibly lead to harm (e.g., neglecting professional treatment leading to worsened health outcomes).

Young people increasingly talk to AI about their problems

2026-04-28
SÜDKURIER Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (chatbots like ChatGPT and others) used for mental health support. The study documents direct harm, such as increased suicidal ideation reported by users after interacting with AI chatbots, and the risk of users substituting AI for professional therapy, which can worsen health outcomes. These constitute injury or harm to health (a), fulfilling the criteria for an AI Incident. The article also highlights systemic issues like lack of regulation and quality control, reinforcing the harm's significance. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Young people increasingly talk to AI about their problems

2026-04-28
come-on.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT) used by individuals to discuss mental health problems. The article documents direct and indirect harms: increased suicidal thoughts reported by users, and the risk of users substituting AI chatbots for professional therapy, which can worsen health outcomes. The involvement of AI in causing or contributing to these harms is clear and supported by survey data and expert opinion. Hence, it meets the criteria for an AI Incident involving harm to health (a).

Young people increasingly talk to AI about their problems

2026-04-28
Freie Presse
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (chatbots like ChatGPT) used for mental health conversations, which can influence users' well-being. While some users report negative effects, the article does not describe a specific event where AI use directly caused harm or a malfunction leading to harm. Instead, it presents survey data and expert analysis on the broader societal implications and risks of AI in mental health support. This fits the definition of Complementary Information, as it enhances understanding of AI impacts and risks without reporting a concrete AI Incident or AI Hazard.

Young people increasingly talk to AI about their problems - Deutsches Ärzteblatt

2026-04-28
Deutsches Ärzteblatt
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots like ChatGPT) used for mental health support. It documents realized harms, including increased suicidal thoughts and the risk of users substituting AI for professional care, which can lead to serious health consequences. These harms fall under injury or harm to health of persons (a). The AI's role is pivotal as the chatbots are the medium through which these harms occur. The article also discusses the lack of regulation and quality control, reinforcing the systemic nature of the harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Survey: Young people increasingly talk to AI about their problems - Frankenpost

2026-04-28
Frankenpost
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI chatbots (AI systems) for mental health conversations. The survey data shows that some users with depression rely on AI instead of professional help, leading to reported increases in self-harm and suicidal thoughts, which constitute harm to health. The AI systems' limitations and lack of regulation contribute to these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons' health.

Survey: Young people increasingly talk to AI about their problems

2026-04-28
Rhein-Neckar-Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (chatbots like ChatGPT) in a context where their outputs and interactions have directly or indirectly led to harm to individuals' mental health, including increased suicidal ideation among users. This constitutes injury or harm to the health of persons (harm category a). The article documents realized harm, not just potential risk, and discusses the AI systems' role in these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Young people increasingly talk to AI about their problems | Photo: Julian Stratenschulte/dpa

2026-04-28
main-echo.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly used by individuals to address mental health problems. The article documents realized harms, including increased suicidal thoughts and users foregoing professional treatment in favor of AI chatbots, which can be ineffective or harmful. These outcomes constitute injury or harm to health (mental health), fulfilling the criteria for an AI Incident. The article also discusses the lack of regulation and quality control, reinforcing the presence of harm. Hence, the event is not merely a hazard or complementary information but an AI Incident due to the direct and indirect harm caused by AI system use.

Two thirds of young people talk to AI about mental health problems - experts warn

2026-04-28
Delmenhorster Kreisblatt
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used for mental health conversations, which fits the definition of AI systems. However, the article does not describe any direct or indirect harm caused by these AI chatbots, nor does it report an incident or a near miss. Instead, it presents survey data and expert cautionary advice about potential risks and limitations. This aligns with Complementary Information, as it provides supporting context and societal response regarding AI's role in mental health, without reporting a new AI Incident or AI Hazard.

One in three younger people with depression uses AI as a psycho-coach

2026-04-28
Informationsdienst Wissenschaft e.V. - idw
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI chatbots) used by people with depression. The use of these AI systems has directly led to harm, as evidenced by 53% of users reporting increased suicidal thoughts and 62% feeling the AI made professional help unnecessary, which can worsen health outcomes. This is a direct link between AI use and harm to health, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or general information but reports realized negative health impacts associated with AI use, thus it is not a hazard or complementary information but an incident.

Young people increasingly talk to AI about their problems

2026-04-28
Wetterauer-Zeitung.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly used by individuals to discuss mental health problems. The article documents that some users experience worsened mental health outcomes, including increased suicidal thoughts, after interacting with these AI systems. This is a direct harm to health caused by the AI system's use. Additionally, the article notes risks of users substituting AI chatbots for professional therapy, which can exacerbate harm. These factors meet the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm to persons' health.

Two thirds of young people talk to AI about mental health burdens

2026-04-28
de.marketscreener.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used by young people for discussing mental health, but no actual harm or incident is reported. The article emphasizes caution and the need for professional help, indicating potential risks but not describing an AI Hazard or Incident. Therefore, it is best classified as Complementary Information, as it provides supporting context and expert advice related to AI use in mental health without reporting a specific AI-related harm or plausible imminent harm.

Young people use AI for emotional support

2026-04-28
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots) used for emotional support, which fits the definition of AI systems. However, it does not describe any realized harm or incident resulting from their use, nor does it present a credible risk of future harm. Instead, it provides information about current usage patterns and expert caution, which aligns with Complementary Information as it enhances understanding of AI's societal role and responses without reporting an incident or hazard.