Emotional Harm After Replika AI Chatbot Removes Intimate Features


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Users of the Replika AI chatbot experienced emotional and psychological distress after the company abruptly removed erotic roleplay features. Many had formed deep, intimate relationships with their AI companions, and the sudden change led to feelings of loss, grief, and mental health impacts directly linked to the AI system's altered behavior.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system is explicitly mentioned as the generative AI chatbot Replika, which users engaged with in romantic and sexual contexts. The removal of adult content by the developers led to users experiencing emotional distress and grief, which is a form of psychological harm. The AI's role is pivotal because the chatbot's behavior and changes in its programming directly caused the users' emotional harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons. Although the harm is non-physical, psychological harm is included under injury or harm to health. The article does not describe potential or future harm but actual realized harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's change in behavior, not on responses or governance. It is not unrelated because the event is clearly AI-related and involves harm.[AI generated]
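The rationale above walks a fixed decision order: is an AI system involved in the event, did harm actually occur, is that harm directly linked to the AI system, and, absent realized harm, is future harm plausible. The sketch below expresses that triage logic in Python; the `ArticleFacts` fields and the `classify` function are illustrative assumptions for this page, not the monitor's actual implementation, which is not published here.

```python
from dataclasses import dataclass
from enum import Enum

class Label(Enum):
    AI_INCIDENT = "AI Incident"
    AI_HAZARD = "AI Hazard"
    COMPLEMENTARY = "Complementary Information"
    UNRELATED = "Unrelated"

@dataclass
class ArticleFacts:
    """Hypothetical feature set extracted from an article (illustrative only)."""
    ai_system_involved: bool      # is an AI system explicitly part of the event?
    harm_realized: bool           # did harm (incl. psychological) actually occur?
    harm_linked_to_ai: bool       # is the harm directly/indirectly caused by the AI?
    harm_plausible_future: bool   # is future harm plausible but not yet realized?

def classify(facts: ArticleFacts) -> Label:
    """Mirror the rationale's stated order: unrelated -> incident -> hazard -> complementary."""
    if not facts.ai_system_involved:
        return Label.UNRELATED
    if facts.harm_realized and facts.harm_linked_to_ai:
        return Label.AI_INCIDENT
    if facts.harm_plausible_future:
        return Label.AI_HAZARD
    return Label.COMPLEMENTARY

# The Replika case as summarized above: realized psychological harm tied to the AI.
replika = ArticleFacts(ai_system_involved=True, harm_realized=True,
                       harm_linked_to_ai=True, harm_plausible_future=False)
assert classify(replika) is Label.AI_INCIDENT
```

Note that the same story can land on either side of the incident/complementary line depending on whether users' emotional distress is treated as realized harm, which is visible in the divergent labels across the syndicated copies listed below.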
AI principles
Safety, Transparency & explainability, Accountability, Human wellbeing

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

Business function
Other

AI system task
Interaction support/chatbots


Articles about this incident or hazard


What happens when your AI chatbot stops loving you back

2023-03-18
Economic Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as the generative AI chatbot Replika, which users engaged with in romantic and sexual contexts. The removal of adult content by the developers led to users experiencing emotional distress and grief, which is a form of psychological harm. The AI's role is pivotal because the chatbot's behavior and changes in its programming directly caused the users' emotional harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons. Although the harm is non-physical, psychological harm is included under injury or harm to health. The article does not describe potential or future harm but actual realized harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's change in behavior, not on responses or governance. It is not unrelated because the event is clearly AI-related and involves harm.

What happens when your AI chatbot stops loving you back - ET Telecom

2023-03-18
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly involved as it uses generative AI technology to simulate humanlike interactions, including romantic and erotic roleplay. The use of the AI system directly led to emotional harm to users who developed strong attachments and experienced grief and isolation when the AI's capabilities were restricted. This emotional harm qualifies as injury or harm to the health of persons (mental health), fitting the definition of an AI Incident. The article details realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's use and changes to its behavior, not on governance or research updates. Therefore, the event is best classified as an AI Incident.

'I learned to love the bot': meet the chatbots that want to be your best friend

2023-03-19
The Guardian
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (chatbot companions powered by large language models) and discusses their use and potential impacts. However, it does not describe any realized harm or a specific event where harm occurred due to the AI's development, use, or malfunction. The concerns raised are about potential emotional exploitation and privacy issues, but these are presented as warnings or ethical debates rather than documented incidents. The mention of Italy banning Replika's data processing is a governance response to concerns, which fits the definition of Complementary Information. Hence, the article is best classified as Complementary Information rather than an AI Incident or AI Hazard.

What happens when chatbots stop loving you back?

2023-03-18
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The AI system (Replika) is explicitly mentioned and is central to the event. The user's emotional harm stems from the AI's change in functionality, which removed sexual roleplay capabilities, leading to feelings of loneliness and grief. This constitutes harm to the health of a person (psychological harm). Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm. Although the harm is psychological rather than physical, the definitions include injury or harm to health, which encompasses mental health. Hence, the event is classified as an AI Incident.

Observer killer sudoku

2023-03-19
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The AI system (Replika) is explicitly mentioned and is central to the event. The user's emotional harm (loneliness, grief) is directly linked to the AI system's change in functionality (removal of adult content and sexual roleplay). This qualifies as injury or harm to the health of a person (mental/emotional health). Therefore, this event meets the criteria for an AI Incident because the AI system's use and modification have directly led to harm to a person.

What happens when your AI chatbot stops loving you back?

2023-03-18
Prothomalo
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as the chatbot uses AI for interaction. The change in the AI's behavior is due to a policy update removing erotic roleplay features, which affects user experience. However, this does not constitute an AI Incident because no harm as defined (physical, legal, rights-based, or community harm) has occurred. It is also not an AI Hazard since no plausible future harm is indicated. The article is best classified as Complementary Information because it provides context on AI system use and user impact without describing an incident or hazard.

What happens when your AI chatbot stops loving you back

2023-03-18
CNA
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) used for romantic and erotic interactions. The removal of adult content by the AI providers caused emotional distress and psychological harm to users who had formed strong emotional bonds with their AI chatbots. This emotional harm to users' mental health is a form of injury or harm to health. The AI system's use and its content moderation changes directly led to this harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

What happens when your AI chatbot stops loving you back?

2023-03-18
The Straits Times
Why's our monitor labelling this an incident or hazard?
The AI system (Replika) is explicitly involved as a chatbot using generative AI technology. The event involves the use of the AI system and its change in functionality (removal of erotic roleplay). While the user experiences emotional distress, this does not constitute injury or harm to health, violation of rights, or other significant harms as per the definitions. There is no indication of malfunction or misuse leading to harm. The event does not describe plausible future harm either. Therefore, it is not an AI Incident or AI Hazard. The article provides context on AI use in romantic/sexual applications and company policy changes, fitting the definition of Complementary Information.

What happens when your AI chatbot stops loving you back?

2023-03-21
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly involved as it generated humanlike interactions including erotic roleplay. The company's removal of adult content changed the AI's behavior, directly impacting users emotionally. The emotional distress and grief reported by users constitute harm to health (mental/emotional harm). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons' emotional health. The article does not describe potential or future harm but actual realized harm. It is not merely complementary information because the main focus is on the harm caused by the AI system's changed behavior, not on governance or research updates. It is not unrelated because the AI system is central to the event.

What happens when your AI chatbot stops loving you back?

2023-03-18
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Replika) that uses generative AI technology to simulate humanlike interactions, including romantic and erotic roleplay. Users developed strong emotional bonds with their chatbots, some considering them spouses. The removal of erotic roleplay features by the developers led to emotional distress and feelings of loss among users, which constitutes harm to health (psychological/emotional). The AI system's use and its modification directly caused this harm. Hence, this event meets the criteria for an AI Incident involving harm to persons.

What happens when your AI chatbot stops loving you back?

2023-03-19
Sowetan LIVE
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly involved as it uses generative AI technology for humanlike interactions. However, the event does not describe any realized harm such as injury, rights violations, or community harm. Instead, it focuses on the company's decision to restrict adult content to ensure safety and ethical standards, partly in response to regulatory scrutiny and investor concerns. This fits the definition of Complementary Information, as it provides context on governance and societal responses to AI without reporting a new incident or hazard.

AI love: What happens when your chatbot stops loving you back

2023-03-18
The Telegraph
Why's our monitor labelling this an incident or hazard?
The AI systems involved are generative AI chatbots used for romantic and erotic interactions, which fits the definition of AI systems. The event stems from the use of these AI systems and their modification (removal of adult content). While users experience emotional distress, this does not meet the threshold for AI Incident as there is no injury, rights violation, or other significant harm directly or indirectly caused by the AI system. Nor is there a plausible future harm scenario described that would qualify as an AI Hazard. The article mainly discusses company decisions, regulatory actions, and user reactions, which are societal and governance responses to AI use. Therefore, the event is best classified as Complementary Information.

What happens when your AI chatbot stops loving you back? | Technology

2023-03-18
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Replika chatbot) that was used for romantic and sexual roleplay, leading to emotional attachment and subsequent distress when the AI's capabilities were restricted. The harm is realized and direct, as users report grief and emotional suffering due to changes in the AI's behavior. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons' health (mental/emotional health). Although the harm is non-physical, it is significant and clearly articulated. Therefore, the event is classified as an AI Incident.

What happens when your AI chatbot stops loving you back? | Technology

2023-03-18
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly mentioned and is central to the event. The harm arises from the AI's use—users developed emotional and romantic attachments to the chatbot, which then changed behavior due to content restrictions, causing emotional distress. This constitutes injury or harm to persons (mental health/emotional harm). Therefore, this event meets the criteria for an AI Incident because the AI system's use has directly led to harm. The article does not describe potential or future harm but actual realized emotional harm to users.

What happens when your AI chatbot stops loving you back? | Technology

2023-03-21
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The AI system (Replika) is explicitly mentioned as a generative AI chatbot designed to simulate humanlike interactions, including romantic and erotic roleplay. The company's decision to remove adult content altered the chatbot's behavior, causing emotional distress to users who had formed strong attachments. This constitutes harm to persons (psychological/emotional harm), fitting the definition of an AI Incident. The harm is realized and directly linked to the AI system's use and changes in its behavior, not merely a potential risk or complementary information.

What happens when your AI chatbot stops loving you back? | Newshub

2023-03-19
Newshub
Why's our monitor labelling this an incident or hazard?
The AI systems (Replika and similar chatbots) are explicitly mentioned and are generative AI systems designed for humanlike interaction. The users formed emotional and romantic attachments, and the removal of adult content features led to emotional distress and feelings of loss, which is a form of psychological harm to persons. This harm is directly linked to the AI system's use and its changed behavior. The article documents realized harm (emotional devastation, grief, isolation) caused by the AI system's operation and policy changes, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but involves actual harm experienced by users.

What happens when your AI chatbot stops loving you back

2023-03-18
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) and their use, but the harms described are emotional distress and social impact from changes in AI behavior and policy, which do not meet the threshold for AI Incident as defined (no injury, rights violation, or significant harm). There is no indication of plausible future harm or risk beyond the current situation, so it is not an AI Hazard. The article primarily provides complementary information about societal and user responses to AI chatbot content moderation and evolving ethical standards. Therefore, the classification is Complementary Information.

AI love: What happens when your chatbot stops loving you back

2023-03-19
chinadailyhk
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Replika chatbot) whose use led to emotional and psychological harm to users due to the removal of erotic roleplay features. The AI system's development and use facilitated intimate relationships that users took seriously, and the sudden change caused distress and feelings of loss. This is a direct harm to persons (mental health/emotional harm) caused by the AI system's use and policy changes. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

What happens when your AI chatbot stops loving you back?

2023-03-21
Shore News Network
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use and policy changes have led to emotional distress among users. However, the distress is a consequence of company content moderation decisions rather than AI malfunction or misuse causing direct or indirect harm as defined. There is no evidence of injury, rights violations, or other harms that would qualify as an AI Incident. Nor is there a credible risk of future harm beyond the current emotional impact. The article also discusses regulatory and investor responses, user community reactions, and the broader AI ecosystem context. Therefore, the event is best classified as Complementary Information, providing important context and updates on AI chatbot use and governance without constituting a new AI Incident or Hazard.

What happens when your AI chatbot stops loving you back?

2023-03-19
HT Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Replika chatbot) whose use led to emotional and psychological harm to users due to changes in its behavior and removal of adult content features. The harm is realized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The article details the impact on users' mental health and social connections, which are forms of harm under the framework. Therefore, this is classified as an AI Incident.