
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Roblox has launched Sentinel, an AI system that analyzes billions of chat messages in real time to detect and report potential child predatory behavior and exploitation. In 2025, Sentinel facilitated 1,200 reports of possible child exploitation, aiming to enhance safety for its predominantly young user base.[AI generated]
Why is our monitor labelling this an incident or hazard?
Roblox Sentinel is an AI system that analyzes chat messages in real time to identify potentially harmful content related to child exploitation. The system's use has directly led to the detection and reporting of numerous cases of potential exploitation, cases that constitute harm to children's health and safety. Because the AI system's use is directly linked to the identification of, and intervention in, this realized harm, the event qualifies as an AI Incident. The article also mentions ongoing challenges, but these do not negate the system's realized impact in detecting harm.[AI generated]