Telegram AI Bots Misused for Creating Deepfake Nudes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered chatbots on Telegram are being misused by millions to create nonconsensual deepfake nudes, posing significant risks to privacy and dignity, especially for women and young girls. Despite efforts to curb such misuse, these bots remain easily accessible, leading to potential sextortion and human rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes actual harms caused by AI systems: bots using generative AI to create explicit nonconsensual deepfake images (NCII), affecting millions of users and victims. This constitutes violations of human rights and direct psychological harm, meeting the definition of an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Human wellbeing; Safety; Robustness & digital security; Accountability; Transparency & explainability; Fairness

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women; Children

Harm types
Human or fundamental rights; Psychological; Reputational; Economic/Property

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Millions of People Are Using Abusive AI 'Nudify' Bots on Telegram

2024-10-15
Wired
Why's our monitor labelling this an incident or hazard?
The article describes actual harms caused by AI systems: bots using generative AI to create explicit nonconsensual deepfake images (NCII), affecting millions of users and victims. This constitutes violations of human rights and direct psychological harm, meeting the definition of an AI Incident.
Shocking! Telegram AI Bots Can Generate Deepfakes Of Women And Girls: Report

2024-10-17
Zee News
Why's our monitor labelling this an incident or hazard?
AI systems (deepfake bots) are actively being used to produce explicit, non-consensual imagery of women and girls, resulting in privacy violations, potential sexual exploitation, and psychological harm. The harm is realized and widespread, with millions of users engaging with these tools and documented cases of sextortion.
Creeps Flocking To Telegram To Generate Nude Images And Videos With Easily Available AI Bots: Report

2024-10-17
Mashable India
Why's our monitor labelling this an incident or hazard?
The piece describes an ongoing, concrete harm—non-consensual intimate image abuse—directly enabled by AI-powered nudify bots. This constitutes a violation of personal rights and bodily autonomy, with real victims (celebrities and private individuals) and documented usage. Therefore, it is an AI Incident.
'Nudify' Deepfake Bots on Telegram Are Up to 4 Million Monthly Users

2024-10-16
VICE
Why's our monitor labelling this an incident or hazard?
These Telegram bots use AI image generation to create fabricated nude photos and videos of women—including minors—without consent, causing psychological trauma, humiliation, and privacy violations. The AI systems are directly enabling and executing non-consensual deepfake pornography at scale, meeting the definition of an AI Incident.
Millions of People Are Using Abusive AI 'Nudify' Bots on Telegram | W...

2024-10-17
archive.is
Why's our monitor labelling this an incident or hazard?
The described bots use AI (deepfake) to remove clothes from images without consent, directly generating abusive and exploitative pornography—including images of children—constituting a clear harm. This misuse of AI has materialized into ongoing violations of rights and psychological harm, fitting the definition of an AI Incident.
Four million users of Telegram AI can create deepfake nudes of anyone: Report

2024-10-16
Economic Times
Why's our monitor labelling this an incident or hazard?
The article describes how millions of users actively employ AI-driven deepfake tools to create and distribute nude images of real people without consent, resulting in concrete harms such as sextortion, abuse of teenage girls, and violation of personal rights. This is a realized use of AI causing direct harm, qualifying it as an AI Incident.
'Nightmarish scenario': AI-powered chatbots on Telegram allow users to create nudes of anyone, investigation finds

2024-10-16
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The described event involves AI systems (Telegram chatbots using deepfake models) directly causing harm by enabling the creation of non-consensual nude and sexual images of individuals, which constitutes violations of privacy and personal rights. This is a realized harm stemming from misuse of AI, fitting the definition of an AI Incident.
'Nudify' bots to create naked AI images in seconds rampant on...

2024-10-15
New York Post
Why's our monitor labelling this an incident or hazard?
The article describes active misuse of generative AI systems (deepfake bots) to produce nude images of individuals without consent, causing direct harm to victims’ mental health, rights, and personal security. This meets the definition of an AI Incident, as the AI’s use has directly led to violations of personal and human rights and inflicted psychological damage.
'Nightmarish' AI bots on Telegram can create nudes photos of anyone, reveals probe

2024-10-16
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The Telegram bots employ generative AI to produce deepfake nudes of anyone without consent, leading to real-world harms such as sextortion of teenage girls, psychological trauma, and violations of rights. This is a realized instance of AI-driven harm rather than a hypothetical risk.
50 AI Bots With Over 4 Million Users Used To Create Deepfakes Of Women And Girls On Telegram: Report - News18

2024-10-17
News18
Why's our monitor labelling this an incident or hazard?
These Telegram bots are AI systems that generate fake nude images without consent, causing significant psychological and reputational harm to victims (especially young women and girls) and enabling sextortion. The development and use of these AI systems have directly resulted in violations of personal and human rights and clear harm to individuals.
People are using AI bots to create nude images of almost anyone online

2024-10-16
BGR
Why's our monitor labelling this an incident or hazard?
The article describes real, active misuse of AI systems—dozens of generative deepfake chatbots on Telegram—that are producing non-consensual nude photos of arbitrary people. This misuse inflicts clear harm (privacy violations, potential reputational and emotional damage) and represents a violation of human rights. Therefore, it meets the criteria for an AI Incident.
Telegram's AI chatbots are capable of creating nudes of anyone: Report

2024-10-16
WION
Why's our monitor labelling this an incident or hazard?
The reported bots explicitly create sexualized deepfakes that remove clothing from photos or fabricate sexual activity. This is a direct misuse of AI leading to violations of personal rights, reputational and psychological harm—meeting the definition of an AI Incident.
Deepfake Bots on Telegram and Privacy Violations

2024-10-17
Analytics Insight
Why's our monitor labelling this an incident or hazard?
Deepfake bots are AI systems that generate synthetic images and videos without consent. The article details realized harms (non-consensual pornography, identity theft, defamation, emotional distress, and disinformation) directly caused by these AI systems. This aligns with the definition of an AI Incident, as the development and use of deepfake bots have directly led to multiple forms of harm.