TikTok/Douyin AI Algorithms Used for Censorship and Propaganda, Raising Global Concerns


The information displayed in the AIM (the OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

TikTok (Douyin) uses AI-driven content moderation and recommendation to censor anti-government content, spread Chinese state propaganda, and promote misinformation. These practices have misled users, violated privacy, and raised national security concerns, with experts and officials warning of the platform's role in information manipulation and potential espionage. [AI generated]

Why's our monitor labelling this an incident or hazard?

TikTok is an AI-powered social media platform that uses algorithmic content recommendation and moderation, which fits the definition of an AI system. The article documents realized harms including censorship (violation of rights), dissemination of propaganda and misinformation (harm to communities), privacy violations (breach of legal protections), and espionage risks (harm to national security). These harms are directly caused or facilitated by the AI system's use and operation, including content filtering and data collection practices. Hence, this is an AI Incident due to direct and indirect harms caused by the AI system's use and misuse. [AI generated]
AI principles
Respect of human rights; Privacy & data governance; Fairness; Transparency & explainability; Democracy & human autonomy; Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public; Government

Harm types
Human or fundamental rights; Public interest; Psychological

Severity
AI incident

Business function
Marketing and advertisement; Monitoring and quality control

AI system task
Organisation/recommenders; Recognition/object detection


Articles about this incident or hazard


Douyin Has Become a Heavy Weapon in the CCP's Anti-American Propaganda (Part 2)

2020-05-15
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-powered social media platform that uses algorithmic content recommendation and moderation, which fits the definition of an AI system. The article documents realized harms including censorship (violation of rights), dissemination of propaganda and misinformation (harm to communities), privacy violations (breach of legal protections), and espionage risks (harm to national security). These harms are directly caused or facilitated by the AI system's use and operation, including content filtering and data collection practices. Hence, this is an AI Incident due to direct and indirect harms caused by the AI system's use and misuse.

Douyin Has Become a Heavy Weapon in the CCP's Anti-American Propaganda (Part 2) (Photos)

2020-05-15
看中国
Why's our monitor labelling this an incident or hazard?
TikTok/Douyin is an AI-powered social media platform that uses AI systems for content recommendation, moderation, and data processing. The article documents multiple harms directly linked to the platform's AI-enabled operations: censorship of political content, spreading of propaganda, violation of privacy laws (illegal data collection from children), and potential espionage risks. These constitute violations of rights and harm to communities and national security. The harms are realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident: the AI system's use and misuse (censorship, data mishandling) have directly and indirectly led to significant harms.

Douyin Has Become a Heavy Weapon in the CCP's Anti-American Propaganda (Part 2) (Photos)

2020-05-15
看中国
Why's our monitor labelling this an incident or hazard?
TikTok and Douyin employ AI systems for content recommendation, moderation, and data collection. The article documents realized harms including censorship of political content, dissemination of propaganda and misinformation, violation of user privacy (especially children's data), and potential misuse of data for surveillance and intelligence by a foreign government. These constitute violations of human rights and breaches of legal obligations, as well as harm to communities through misinformation and manipulation. The involvement of AI systems in these harms is explicit or reasonably inferred given the platforms' reliance on AI for content curation and moderation. Hence, this is an AI Incident rather than a mere hazard or complementary information.

How Does the CCP Weaponize TikTok and Douyin? - The Epoch Times

2020-06-03
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered platforms TikTok and Douyin, which use AI for content recommendation and moderation. It details how these AI systems are manipulated by CCP policies and laws to censor content, surveil users, and share data with intelligence agencies, leading to violations of human rights (free speech, privacy) and harm to communities (misinformation, propaganda). The harms are ongoing and directly linked to the AI systems' use and control, meeting the criteria for an AI Incident rather than a hazard or complementary information.

How Does the CCP Weaponize TikTok and Douyin? - The Epoch Times

2020-06-03
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems embedded in TikTok/Douyin for content moderation and surveillance, which are used to censor information and spread propaganda, leading to violations of rights and harm to communities. The forced data sharing with CCP intelligence agencies constitutes a breach of privacy and human rights. The harms described are ongoing and realized, not merely potential. Hence, this qualifies as an AI Incident because the AI systems' use and control by the CCP have directly and indirectly caused significant harms as defined in the framework.

How Are TikTok and Douyin Being Weaponized by the CCP? (Photos)

2020-06-05
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses TikTok/Douyin's role as a CCP-controlled platform that censors content, spreads propaganda, and submits user data to CCP intelligence agencies, which constitutes violations of rights and harms to communities through misinformation and suppression of free expression. TikTok's content moderation and recommendation systems are AI-driven, making the AI system's use pivotal in causing these harms. The harms are realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident.

How Are TikTok and Douyin Being Weaponized by the CCP? (Photos)

2020-06-05
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses TikTok and Douyin, which are AI-driven social media platforms using AI for content recommendation, moderation, and censorship. The CCP's control over these platforms leads to direct harms: suppression of free speech, dissemination of false information, violation of privacy rights, and manipulation of public opinion. These harms fall under violations of human rights and harm to communities. The AI systems' role in content filtering, recommendation, and data handling is pivotal to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

TikTok Has Become a CCP Weapon: What Are the Dangers for the Young People Who Flock to It? - The Epoch Times

2020-06-07
The Epoch Times
Why's our monitor labelling this an incident or hazard?
TikTok and Douyin are AI-driven social media platforms that use AI systems for content recommendation, moderation, and data processing. The article alleges that these platforms are used by the CCP for surveillance (privacy violations), censorship (restriction of information), and propaganda (ideological manipulation), which constitute violations of human rights and harm to communities. The harms described are ongoing and directly linked to the use of these AI systems. Hence, this qualifies as an AI Incident due to realized harms involving AI system use leading to rights violations and societal harm.

Foreign Media: TikTok Has Signed the EU Code of Practice on Disinformation

2020-06-11
Techweb
Why's our monitor labelling this an incident or hazard?
The article describes a governance action where TikTok, a platform that uses AI algorithms for content recommendation and moderation, has committed to an EU code aimed at reducing misinformation. However, the event itself does not describe a specific AI Incident (no realized harm) or AI Hazard (no plausible future harm from AI misuse) but rather a societal/governance response to AI-related challenges. Therefore, it fits the definition of Complementary Information, as it provides context and updates on responses to AI-related misinformation without reporting a new incident or hazard.

TikTok's New CEO Kevin Mayer Meets with EU Commissioner to Discuss Tackling Disinformation on Social Media

2020-06-10
Techweb
Why's our monitor labelling this an incident or hazard?
While TikTok uses AI systems for content recommendation and moderation, the article does not describe any specific AI incident or hazard causing harm or plausible future harm. The discussion is about policy and cooperation to address misinformation, which is a governance response and thus qualifies as Complementary Information rather than an AI Incident or AI Hazard.

TikTok Has Become a CCP Weapon: What Are the Dangers for the Young People Who Flock to It? (Photos)

2020-06-09
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly describes TikTok and Douyin as AI-enabled platforms used for surveillance, data collection, and propaganda dissemination, which have directly led to harms including privacy violations, exposure of minors to harmful content, ideological manipulation, and threats to individual and community rights. These harms fall under violations of human rights and harm to communities. The involvement of AI systems in content moderation, data processing, and recommendation algorithms is reasonably inferred. Therefore, this event qualifies as an AI Incident due to the realized harms caused by the use of these AI systems in the described context.

EU Regulators Form Task Force to Examine TikTok's European Operations

2020-06-11
早报
Why's our monitor labelling this an incident or hazard?
While TikTok likely uses AI systems for content recommendation and data processing, the article focuses on regulatory review and investigation rather than any specific harm or malfunction caused by AI. There is no indication of direct or indirect harm occurring yet, nor a specific AI-related incident. The event is about potential risks and regulatory responses, fitting the category of Complementary Information as it provides context and updates on governance related to AI systems in TikTok.

Foreign Media: TikTok Has Signed the EU Code of Practice on Disinformation

2020-06-11
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves AI-related platforms (TikTok uses AI for content recommendation and moderation), but the article does not describe any specific incident or harm caused by AI systems, nor does it indicate a plausible future harm from AI misuse or malfunction. Instead, it reports a governance response (signing a code of conduct) aimed at mitigating misinformation risks. Therefore, this is Complementary Information as it provides context on societal and governance responses to AI-related challenges without describing a new AI Incident or AI Hazard.

TikTok Has Become a CCP Weapon: What Are the Dangers for the Young People Who Flock to It? (Photos)

2020-06-09
看中国
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-driven platform employing recommendation algorithms and content-moderation AI systems. The article details realized harms, including privacy violations, ideological manipulation, and exposure of minors to harmful content, all linked to TikTok's AI-enabled operations. These constitute violations of rights and harm to communities, fitting the definition of an AI Incident. The article focuses on actual harms caused by the AI system's use, not merely potential risks or general commentary, so it is neither an AI Hazard nor Complementary Information. The presence of AI is reasonably inferred from the platform's known AI-based content curation and moderation. Hence, classification as an AI Incident is appropriate.