
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Valve is developing SteamGPT, an internal AI tool designed to automate moderation and analyze cheating reports on Steam, including for games such as Counter-Strike 2. While not yet deployed, the system could affect user account management and risks wrongful bans or privacy violations if it malfunctions or is misused.[AI generated]
Why is our monitor labelling this an incident or hazard?
An AI system (SteamGPT) is explicitly described as being developed for moderation and anti-cheat purposes, involving the analysis and summarization of player behaviour and reports. The AI is not confirmed to be in production or to have caused harm, but the potential for false positives and wrongful bans is acknowledged, indicating plausible future harm. Because the event describes neither realized harm nor a response to past harm, it is not an AI Incident or Complementary Information. The presence of an AI system with the potential to cause harm in the future fits the definition of an AI Hazard.[AI generated]