NetEase Apologizes for AI-Generated Valentine's Promo Featuring Underage Character


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

NetEase Super Membership used AI-generated promotional text featuring an underage NPC from 'Yan Yun 16 Sheng' in a Valentine's Day campaign, sparking player backlash. NetEase apologized, took down the content, and committed to tighter AI content review and closer collaboration with the game team to prevent future missteps.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI to generate promotional content that inappropriately depicted an underage character, causing harm in the form of a negative player experience and reputational damage. The AI system's use directly led to this harm, as the AI-generated text was the source of the inappropriate content. This therefore qualifies as an AI Incident: harm caused by the misuse of AI-generated content.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security; Respect of human rights; Human wellbeing; Transparency & explainability

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Consumers; Business

Harm types
Reputational; Psychological; Public interest; Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


NetEase Apologizes for Using an Underage Character in Valentine's Day Promotion: Copy Was AI-Generated

2025-02-17
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate promotional content that inappropriately depicted an underage character, causing harm in the form of a negative player experience and reputational damage. The AI system's use directly led to this harm, as the AI-generated text was the source of the inappropriate content. This therefore qualifies as an AI Incident: harm caused by the misuse of AI-generated content.

NetEase Super Membership Promotion Featuring Underage Girl Character Draws Player Backlash as Hongxian Incident Escalates

2025-02-18
ZOL Games Channel
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate promotional content that inappropriately depicted a game character, leading to harm in the form of a negative player experience and reputational damage. The AI-generated content was directly involved in causing this harm, and the company responded with an apology and content removal. This fits the definition of an AI Incident because the AI system's use directly led to harm (player dissatisfaction and reputational damage).

NetEase Apologizes for Using an Underage Character in Valentine's Day Promotion: Hongxian Valentine's Copy Was AI-Generated

2025-02-20
chinaz.com
Why's our monitor labelling this an incident or hazard?
The AI system generated promotional content that used an underage character in a way that was considered inappropriate by the community, leading to significant negative reactions and an official apology. While the AI's role in generating the content is clear, the harm is limited to reputational damage and user dissatisfaction rather than direct or indirect physical, legal, or systemic harm. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard but rather constitutes Complementary Information about the consequences and responses related to AI-generated content misuse.

NetEase Apologizes for Using an Underage Character in Valentine's Day Promotion: Copy Was AI-Generated

2025-02-17
MyDrivers
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate promotional text that inappropriately depicted an underage character, leading to harm in the form of negative community impact and a violation of user expectations. The harm is indirect but real, as the AI-generated content caused offense and a poor user experience. This fits the definition of an AI Incident because the AI system's use directly led to harm to the community's experience and trust. The company's response is complementary information but does not change the classification of the original event as an AI Incident.

NetEase Apologizes for Using an Underage Character in Valentine's Day Promotion; NetEase Says Hongxian Valentine's Copy Was AI-Generated

2025-02-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating promotional content that included an underage character in a problematic way, leading to public dissatisfaction and an official apology. However, the incident does not describe direct or indirect physical harm, violation of rights, or other significant harms as defined for an AI Incident; the main issues are reputational damage and ethical concerns about content appropriateness. This event is therefore best classified as Complementary Information, as it provides an update on the use and consequences of AI-generated content and the company's response, rather than describing a direct AI Incident or a plausible future hazard.