/data/photo/2025/12/28/6951016b30af5.jpg)
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Indonesian authorities have temporarily blocked Grok AI, developed by xAI and used on the X platform, due to its misuse in generating deepfake sexual content. The government warns of a potential permanent ban if the platform fails to comply with national regulations aimed at preventing AI-driven harm.[AI generated]
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot that generates content, including images. The generation and spread of obscene images without consent constitutes harm to individuals' rights and communities, fitting the definition of an AI Incident. The involvement of the AI system's outputs directly led to regulatory blocking due to violations and harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]