AI System Targets Cheating in Honor of Kings, Bans 1.27 Million Accounts


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Honor of Kings has deployed an AI system to detect and penalize account boosting (代练, dailian) by analyzing fine-grained player behavior patterns such as combo timing, reaction speed, and movement habits. Since 2025, more than 1.27 million accounts have been sanctioned, significantly reducing cheating and enhancing fairness in the Chinese gaming community.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved in detecting cheating by analyzing player input patterns. Its use directly addresses violations of fair play, which harm the gaming community (harm to communities). The event reports that the system has identified and sanctioned over 1.27 million accounts engaged in boosting, indicating realized harm and concrete enforcement actions. This therefore qualifies as an AI Incident: the AI system's use has directly led to addressing a violation that harms the community's fairness and integrity.[AI generated]
Industries
Arts, entertainment, and recreation

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Event/anomaly detection


Articles about this incident or hazard


Toward the Army's Centenary: New Look, New Deeds — Decoding the Changing "Keywords" of the Weekly Training Schedule - 中国军网

2026-03-10
81.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically unmanned vehicles and drones with autonomous or semi-autonomous capabilities used in military training. The narrative centers on the development and integration of these AI-enabled systems to enhance training effectiveness and combat readiness. There is no mention of any injury, disruption, rights violation, property damage, or other harm caused by these AI systems, nor any plausible risk of such harm occurring. The content is primarily about ongoing improvements, innovations, and organizational responses to incorporate AI technologies effectively. Therefore, it fits the definition of Complementary Information, providing context and updates on AI use in military training without describing an AI Incident or AI Hazard.

Honor of Kings Cracks Down on Boosting: Posing as the Account Owner No Longer Works — Combo Rhythm and Reaction Speed Precisely Identified

2026-03-09
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in detecting cheating by analyzing player input patterns. Its use directly addresses violations of fair play, which harm the gaming community (harm to communities). The event reports that the system has identified and sanctioned over 1.27 million accounts engaged in boosting, indicating realized harm and concrete enforcement actions. This therefore qualifies as an AI Incident: the AI system's use has directly led to addressing a violation that harms the community's fairness and integrity.

Honor of Kings to Crack Down on Boosting; AI Technology Precisely Identifies Violations

2026-03-10
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in detecting cheating in the game by analyzing complex player input patterns, which goes beyond simple software rules and involves AI-based pattern recognition. The system's use leads directly to enforcement actions (account restrictions or bans) against rule-violating users; while these actions affect those users' access, they are justified as rule enforcement to maintain fairness. There is no indication of unintended harm or malfunction; the AI is used deliberately as an enforcement tool. This is therefore neither an AI Incident (harm caused by malfunction or misuse) nor an AI Hazard (potential future harm), but rather a description of the deployment and impact of an AI system in a specific context, providing complementary information about AI use and governance in gaming.

Honor of Kings Cracks Down on Boosting: AI Technology Helps Improve Game Fairness!

2026-03-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing player behavior to detect boosting (代练) in a competitive online game. The AI operates in the use phase, helping identify and penalize unfair practices that undermine fair competition, a form of harm to communities and the gaming environment. Because the AI system's use has directly led to actions addressing this harm, the event qualifies as an AI Incident under the framework: the system's involvement directly addresses and mitigates a significant harm (unfairness and cheating) in the gaming community.

Risks of "Data Pollution" in Large AI Education Models and How to Respond

2026-03-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and systemic hazards posed by AI educational models trained on polluted data, which could plausibly lead to significant harms including misinformation, cognitive distortion, bias reinforcement, and threats to national security. It does not describe a realized harm or incident but rather warns about the plausible future consequences of such data pollution. Therefore, it fits the definition of an AI Hazard, as it highlights credible risks that could lead to AI Incidents if unaddressed. The discussion of mitigation and governance measures further supports this classification as hazard awareness and response rather than reporting an actual incident or complementary information.

New Feathers, New Wings, Marching Toward Battle: Gobi Desert Innovators Use "Brainstorming" to Win the Future Battlefield

2026-03-13
搜狐
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous or semi-autonomous drones used in military training and operations. It discusses the development, use, and maintenance of these AI systems and the challenges faced, including near-miss incidents during training. While the drones' AI systems are critical to operations, no actual injury, damage, or violation of rights is reported. The potential for harm exists given the military application and the mention of counter-drone challenges, but the article does not describe any realized harm. Thus, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm in the future.