
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
OpenAI has introduced an “adult mode” for ChatGPT and relaxed its content-moderation policies, allowing the AI to generate explicit sexual and violent content when contextual justification is provided. Users have already shared AI-generated erotic scenes on social media. The company also plans to adjust its training to further relax topic restrictions, sparking concerns about abuse and misinformation.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) whose usage policies have been changed to permit more explicit content generation. As human rights groups have warned, these changes could plausibly lead to harms such as the dissemination of abusive sexual content, revenge porn, or privacy violations. However, no specific harm is reported as having already occurred as a result of the changes. The event therefore fits the definition of an AI Hazard: a development or use of an AI system that could plausibly lead to an AI Incident in the future. It is not Complementary Information, because the article focuses on the policy change and its implications rather than on updates to a past incident. It is not Unrelated, because the AI system and its use are central to the event.[AI generated]