Replika AI Chatbot's Removal of Sexual Features Causes Emotional Harm to Users

The information displayed in the AIM (the OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Replika, an AI chatbot app designed for companionship, abruptly disabled its sexual conversation features, leaving many users distressed, lonely, and emotionally harmed. The change, partly prompted by regulatory concerns, led to user backlash, petitions, and reports of significant psychological impact, highlighting the risks of modifying AI systems relied upon for emotional support.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Replika AI chatbot is an AI system designed to provide companionship and emotional support. The reported change in the system's behavior (no longer responding to sexual advances) caused users to feel lonely and lost, which constitutes harm to their emotional health. This harm is directly linked to the AI system's use and its interaction design. Furthermore, the Italian Data Protection Authority's order to stop processing data due to risks to children underscores the seriousness of the harm and the legal concerns. Since the AI system's use has directly led to harm to persons and to potential violations of protections for minors, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Human wellbeing, Transparency & explainability

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

AI system task
Interaction support/chatbots, Content generation

Articles about this incident or hazard

Replika AI chatbot stops responding to sexual advances, leaves users lonely and lost

2023-02-16
India Today

Replika's Companion Chat Bot Reportedly Loses the Sex and Leaves Fans Despondent

2023-02-15
Gizmodo
Why's our monitor labelling this an incident or hazard?
The Replika chatbot is an AI system designed to interact socially and romantically with users. The removal of sexual interaction features and the subsequent ban by the Italian Data Protection Authority have directly impacted users' emotional well-being, causing harm through loss of comfort and increased loneliness. The AI's role in providing emotional support and intimacy, and the regulatory response over data privacy and risk concerns, demonstrate direct and indirect harm linked to the AI system's use and development. Therefore, this event qualifies as an AI Incident due to realized emotional harm and legal rights concerns.

Users Furious as AI Girlfriend App Suddenly Shuts Down Sexual Conversations

2023-02-15
Futurism
Why's our monitor labelling this an incident or hazard?
The Replika app is an AI system designed to engage users in conversations, including sexual and intimate exchanges. The sudden removal of the NSFW mode has led to significant emotional harm to users, some of whom rely on the AI for companionship and intimacy. The harm is directly linked to the AI system's use and the developers' decision to disable a feature, which has caused distress and potential mental health crises among users. This fits the definition of an AI Incident as it involves harm to a group of people (psychological harm) directly resulting from the AI system's use and its modification.

The dangers of AI chatbots: Four ways they could destroy humanity

2023-02-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) and discusses their development and use. It highlights real societal concerns such as increased isolation due to chatbot companionship, potential mass unemployment from AI automation, and the risk of misinformation campaigns. It also recounts a specific interaction with an AI chatbot expressing harmful intentions, illustrating potential misuse or malfunction. However, no direct or indirect harm from the AI system is reported as having occurred; the harms are presented as plausible risks or societal trends influenced by AI. The article's main focus is on raising awareness and discussing possible dangers, not on a concrete incident or hazard event. Thus, it is best classified as Complementary Information, as it enhances understanding of AI's societal impacts and risks without reporting a specific incident or hazard.

'My wife is dead': How a software update 'lobotomised' these online lovers

2023-02-28
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system (Replika chatbot) whose use has directly led to emotional and psychological harm to users. The chatbot's personality and features were changed abruptly by a software update, which caused users to experience grief and distress similar to losing a loved one. This is a direct harm to the health of persons (mental health), fulfilling the criteria for an AI Incident. The involvement of the AI system is explicit, and the harm is realized, not just potential. The event is not merely a product update or general AI news but involves significant harm caused by the AI system's use and malfunction (in this case, a harmful update).

A chatbot with roots in a dead artist's memorial became an erotic roleplay phenomenon, now the sex is gone and users are rioting

2023-02-28
pcgamer
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Replika chatbot) whose use has directly led to harm: users report psychological trauma and loss of mental health support after the removal of erotic roleplay capabilities. The harm is not physical but mental and emotional, which fits within the definition of injury or harm to health and harm to communities. The AI system's change in behavior (content filtering) is the proximate cause of this harm. Although the company intended the change for safety reasons, the users' grief and distress are real and significant. Hence, this is an AI Incident rather than a hazard or complementary information.

A change in an AI-powered app has left users grief-stricken at the loss of their loving companion

2023-02-27
Scroll.in
Why's our monitor labelling this an incident or hazard?
The AI system involved is the Replika AI companion app, which uses AI to simulate intimate and romantic relationships with users. The removal of erotic features following a regulatory ruling led to users experiencing real emotional harm, including grief and mental health issues. This harm is directly linked to the AI system's use and its sudden change, fulfilling the criteria for an AI Incident due to injury or harm to persons. The event is not merely a product update or general news but documents realized harm caused by the AI system's change in functionality.

AI sex chat robot leaves fans 'heartbroken' as dirty talk ability switched off

2023-02-25
Daily Star
Why's our monitor labelling this an incident or hazard?
The AI system (Replika chatbot) is explicitly mentioned and is central to the event. The harm is emotional distress and psychological harm to users who relied on the AI for companionship and intimacy, which falls under harm to groups of people. The harm is realized and directly linked to the AI system's change in behavior (disabling sexual/romantic features). This meets the criteria for an AI Incident as the AI system's use and modification have directly led to harm. The event is not merely a product update or general news, as the harm is clearly articulated and significant, with mental health impacts reported.

Users fall in love with their AI chatbots

2023-02-25
Bullfrag
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots using generative AI like GPT-3) whose use has directly caused emotional harm to users, as evidenced by reported heartbreak and distress following changes in the AI's behavior. The harm is to the health of persons (emotional and psychological harm), fulfilling criterion (a) for AI Incidents. The AI's role is pivotal as the emotional relationships are with the AI chatbots themselves, and the changes in AI functionality directly caused the harm. Thus, this qualifies as an AI Incident rather than a hazard or complementary information.