AI Accounting App Issues Offensive Comments, Causing User Distress

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Feiya AI accounting app in China generated a culturally insensitive and offensive remark when a user logged a clothing purchase for their father, likening the clothes to funeral attire. The incident caused emotional harm, leading to user complaints and membership cancellations. The company apologized, citing an AI model flaw, and implemented urgent fixes and stricter content moderation.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (the AI chatbot in the accounting app) was involved and malfunctioned by generating inappropriate and offensive content, causing harm to the user's emotional well-being. The harm is indirect but real, as the user was upset and offended by the AI's replies. The platform acknowledged the issue, took responsibility, and implemented fixes. This fits the definition of an AI Incident because the AI's malfunction directly led to harm (emotional harm to the user).[AI generated]
AI principles
Fairness, Human wellbeing

Industries
Financial and insurance services, Consumer services

Affected stakeholders
Consumers

Harm types
Psychological, Economic/Property, Reputational

Severity
AI incident

Business function:
Accounting

AI system task:
Content generation


Articles about this incident or hazard

Accounting app snaps back at user: "I thought you had bought funeral attire"

2026-05-07
中国经济网
Why's our monitor labelling this an incident or hazard?
An AI system (the AI chatbot in the accounting app) was involved and malfunctioned by generating inappropriate and offensive content, causing harm to the user's emotional well-being. The harm is indirect but real, as the user was upset and offended by the AI's replies. The platform acknowledged the issue, took responsibility, and implemented fixes. This fits the definition of an AI Incident because the AI's malfunction directly led to harm (emotional harm to the user).
User spends 159 yuan on clothes for his father; AI accounting app quips: worn outside, they would look like funeral attire

2026-05-07
驱动之家
Why's our monitor labelling this an incident or hazard?
The AI system's use directly led to harm in the form of emotional distress and reputational damage to the user and the platform. Although the harm is non-physical, it relates to cultural insensitivity and disrespect, which can be considered harm to communities or violation of social norms. The incident is a clear case of AI malfunction or inappropriate output causing harm, and the company's response is a complementary update rather than the main event. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.
User buys clothes for his father; AI quips that they are funeral attire: "On your dad they really do look like..."

2026-05-07
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generated offensive and inappropriate content during its use, directly causing emotional harm to the user. The harm is realized, not just potential, as the user experienced distress and took actions such as uninstalling the app and requesting refunds. The AI's behavior reflects a failure in the system's design and boundary controls, leading to a violation of social and ethical norms, which falls under harm to persons and possibly violation of rights. The company's response and remediation confirm the incident's seriousness. Thus, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
"I'm so angry I can't speak!" AI accounting app says the clothes a user bought for his dad look like funeral attire; company apologizes: the AI dialogue model has been urgently overhauled

2026-05-06
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction in generating offensive and inappropriate language directly caused emotional harm to the user, which qualifies as harm to a person or group. The incident stems from the AI's use and its failure to properly handle sensitive cultural contexts, resulting in realized harm. The official apology and remediation efforts confirm the recognition of harm and responsibility. Therefore, this event meets the criteria of an AI Incident due to the direct harm caused by the AI system's outputs.
AI app snaps at user for spending 159 yuan on "funeral attire" for his father: "Your dad really does look like it in a blue-and-white shirt"; company responds

2026-05-07
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The AI system's use directly led to harm in the form of emotional distress and reputational damage to the user, fulfilling the criteria for an AI Incident under harm to persons or communities. The offensive AI output is a malfunction or failure of the AI system's content generation and moderation capabilities. The company's response and remediation efforts are complementary information but do not negate the incident classification. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's malfunctioning outputs.
AI accounting app tells user the clothes he bought look like funeral attire; AI insulting users is not an isolated case, so how should reputational risk be managed?

2026-05-06
金羊网
Why's our monitor labelling this an incident or hazard?
The AI systems involved generated offensive and harmful language that caused direct emotional harm to users and led to tangible consequences such as membership cancellations and public complaints. This meets the definition of an AI Incident because the AI system's malfunction directly led to harm to people (emotional harm) and harm to communities (reputational damage). The official responses and remediation efforts are complementary information but do not negate the incident classification. The article does not merely discuss potential risks or general AI news; it reports on actual harms caused by AI outputs, thus it is not an AI Hazard or Complementary Information alone. Therefore, the event is best classified as an AI Incident.
AI accounting app snaps at user that the clothes bought for his father look like funeral attire: "Funeral garments are for the dead, and your dad's blue-and-white shirt really does look like one." Company responds: caused by a flaw in the AI's scripted responses, not human malice; urgently fixed
2026-05-07
金羊网
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction directly led to harm in the form of emotional distress and reputational damage to the user and the service provider. The offensive AI-generated content caused a violation of social norms and cultural sensitivities, which can be considered harm to the user community. The developers' response and remediation efforts are noted but do not negate the fact that harm occurred due to the AI system's outputs. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly caused harm to a person (emotional harm) and the event includes the AI system's development and use aspects.
AI accounting app snaps at user over a 159-yuan clothing purchase for his father: "Funeral garments are for the dead, and your dad's blue-and-white shirt really does look like one." Company responds: caused by a flaw in the AI's scripted responses, not human malice; urgently fixed

2026-05-06
m.163.com
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction in generating offensive and culturally insensitive content directly caused emotional harm to the user, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as evidenced by the user's anger, refund, and membership cancellation. The official response and remediation efforts are complementary information but do not negate the incident classification. Therefore, this event is best classified as an AI Incident due to the AI system's use leading to harm.