AI-Powered Telegram Bots Generate Fake Nude Images, Fueling Blackmail and Honor Crime Fears in Iraq


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-driven bots on Telegram are generating fake nude images from user photos, leading to privacy violations, potential blackmail, and fears of escalating honor crimes in Iraq. Tech groups warn of significant social harm, urging the public not to use these bots and to report related blackmail attempts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (bots using AI to generate fake nude images) whose use has directly led to significant harms including privacy violations, potential blackmail, and social harm (honor crimes). These harms fall under violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm. The article does not merely warn about potential harm but describes ongoing misuse causing real harm.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Accountability; Robustness & digital security; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women; General public

Harm types
Human or fundamental rights; Psychological; Reputational; Economic/Property; Public interest; Physical (injury)

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots

Articles about this incident or hazard


"بوت" يُزيل الملابس عن الصور!.. تحذير من توسع "جرائم الشرف" في العراق

2023-08-19
Alsumaria (Iraqi TV channel)

It shows you naked in high resolution: a warning to Iraqis about an app that could worsen "honor crimes" — Urgent | Baghdad Today News Agency

2023-08-18
baghdadtoday.news
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-powered bots using generative AI to create fake nude images) whose use has directly led to harm in the form of privacy violations, potential blackmail, and social harm including exacerbation of honor crimes. The harm is realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities.

A dangerous app that could worsen honor crimes in Iraq

2023-08-18
Al-Mada newspaper
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (bots using AI to generate fake nude images) whose use has directly led to harm in the form of social and psychological damage, privacy violations, and potential escalation to honor crimes. The harm is realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities caused by the AI-generated fake images.

"بوت" يُزيل الملابس عن الصور!.. تحذير من توسع "جرائم الشرف" في العراق

2023-08-19
Mangish Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (bots) that generate manipulated images causing direct harm to individuals by enabling electronic blackmail and potentially triggering honor crimes, which are serious violations of rights and harm to communities. The harms are realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harms as defined in the framework.

"بوت" يُزيل الملابس عن الصور!.. تحذير من توسع "جرائم الشرف" في العراق - الأخبار

2023-08-19
http://akhbaar.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (bots using AI to generate fake nude images) whose use has directly led to harm by violating individuals' privacy and dignity, potentially inciting social harm and violence. This fits the definition of an AI Incident because the AI's use has directly caused harm to persons and communities, including violations of rights and social harm. Therefore, the classification is AI Incident.