
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-driven bots on Telegram are generating fake nude images from user photos, leading to privacy violations, potential blackmail, and fears of escalating honor crimes in Iraq. Tech groups warn of significant social harm, urging the public not to use these bots and to report related blackmail attempts.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems (Telegram bots that use AI to generate fake nude images) whose use has directly led to significant harms: privacy violations, potential blackmail, and broader social harm, including fears of honor crimes. These harms fall under violations of human rights and harm to communities. The event therefore qualifies as an AI Incident: the article does not merely warn of potential harm but describes ongoing misuse that is already causing real harm.[AI generated]