Douyin Cracks Down on AI-Generated Fraudulent Accounts and Content


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminal groups exploited AI-generated content (AIGC) on Douyin to create fake personas, spread misleading information, and conduct scams targeting users, including emotional and financial fraud. Douyin responded by banning accounts, removing privileges, and reporting severe cases to authorities, highlighting significant harm caused by AI misuse on the platform.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-generated content (AIGC) to create fake accounts and misleading content, which is used to scam users and spread low-quality or fraudulent information. This constitutes violations of user rights and harms to communities through misinformation and fraud. Since these harms are realized and directly linked to the use of AI systems, this event qualifies as an AI Incident under the framework.[AI generated]
AI principles
Accountability, Safety, Transparency & explainability, Robustness & digital security, Human wellbeing, Privacy & data governance

Industries
Media, social platforms, and marketing; Digital security; Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property, Psychological, Public interest, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Douyin cracks down on six new types of illegal conduct, including gray/black-market "account farming" and "AIGC fakery"; violators face severe punishment

2023-12-20
中关村在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (AIGC) to create fake accounts and misleading content, which is used to scam users and spread low-quality or fraudulent information. This constitutes violations of user rights and harms to communities through misinformation and fraud. Since these harms are realized and directly linked to the use of AI systems, this event qualifies as an AI Incident under the framework.

Douyin combats black/gray-market "follower-growing and account-farming" violations carried out through AIGC fakery and other means

2023-12-22
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the use of AIGC to create fake virtual characters and mass-produce content for fraudulent follower growth, which constitutes misuse of AI technology. The harms include violations of platform rules, potential harm to communities (e.g., exposure of minors to inappropriate content), and economic harm through improper monetization. Since these harms are occurring due to the AI system's misuse, this qualifies as an AI Incident. The platform's enforcement actions are responses to this incident but do not change the classification.

Douyin cracks down on six new types of illegal and non-compliant behavior, including gray/black-market "account farming" and AIGC fakery

2023-12-19
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI (AIGC) to create virtual characters impersonating real people and spreading misleading or fraudulent content, which has led to harms such as scams and misinformation affecting users, including vulnerable groups like minors and the elderly. This meets the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to communities and violations of rights. The platform's active crackdown and reporting to authorities further confirm the materialization of harm rather than just potential risk.

Douyin cracks down on six new types of illegal and non-compliant behavior, including gray/black-market "account farming" and AIGC fakery

2023-12-19
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AIGC) used to create virtual personas and generate deceptive content that has led to direct harms including fraud, misinformation, and emotional scams. The harms affect individuals and communities, fulfilling the criteria for an AI Incident. The platform's response to ban accounts and report severe cases to law enforcement confirms that these harms are materialized and significant. Hence, this is not merely a potential risk or complementary information but a clear AI Incident involving the use and misuse of AI systems causing harm.

Douyin cracks down on six new types of illegal conduct, including gray/black-market "account farming" and "AIGC fakery"; violators face severe punishment

2023-12-20
m.tech.china.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (AIGC) to create fake virtual personas and produce misleading content for fraudulent purposes, which directly harms users through scams and misinformation. The AI system's use in these black and gray market activities leads to violations of user rights and community harm. The platform's active enforcement and reporting to authorities confirm that harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident.

Douyin cracks down on six new types of illegal and non-compliant behavior, including gray/black-market "account farming" and AIGC fakery

2023-12-19
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (AIGC) to create fake virtual personas and spread fraudulent or misleading content, which has caused harm such as scams and misinformation. This constitutes violations of rights and harm to communities, fitting the definition of an AI Incident. The platform's response is a mitigation effort but does not negate the fact that harm has occurred due to AI system misuse. Therefore, this event qualifies as an AI Incident.

Douyin cracks down on new types of illegal and non-compliant behavior, including black/gray-market "account farming" and AIGC fakery

2023-12-19
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AIGC) used to generate fake virtual characters and content that misleads and scams users, causing harm to individuals and communities. The misuse of AI-generated content to impersonate, defraud, and manipulate users constitutes violations of rights and harms communities, meeting the criteria for an AI Incident. The platform's enforcement actions and reporting to law enforcement further confirm the recognition of realized harm stemming from AI misuse.

Douyin cracks down on new forms of account farming

2023-12-20
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (AIGC) to create fake accounts and spread harmful content, which has directly led to harms including fraud and exploitation. These harms fall under violations of rights and harm to communities. Since the AI system's misuse has caused actual harm, this qualifies as an AI Incident rather than a hazard or complementary information. The platform's enforcement actions are responses to an ongoing incident rather than the main focus of the article, which centers on the harm caused by AI misuse.

Douyin cracks down on gray/black-market "account farming", "AIGC fakery", and six new types of…

2023-12-20
life.3news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (AIGC) to create virtual personas that impersonate real people and spread misleading or fraudulent content, which has led to harms such as scams and deception of users. This constitutes violations of user rights and harms to communities through misinformation and fraud. The platform's enforcement actions indicate that these harms are materialized, not just potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harms, including violations of rights and harm to communities.