TikTok's AI Algorithms Linked to Child Harm and National Security Threats in Taiwan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Taiwan's government watchdog reports TikTok's AI-driven recommendation algorithms have promoted dangerous challenges, causing child injuries and deaths, and violated children's privacy through unauthorized data collection. The platform is also cited as a tool for Chinese disinformation, posing national security risks. Authorities are criticized for inadequate regulation and response.[AI generated]

Why's our monitor labelling this an incident or hazard?

TikTok is an AI-driven social media platform using recommendation algorithms that influence user behavior and content exposure. The article details realized harms including violations of children's privacy rights, health risks from dangerous content promoted by the AI system, and national security threats from data being stored in China and used for CCP influence operations. These constitute violations of rights and harm to communities and national security, directly linked to the AI system's use and outputs. The government's inadequate regulatory response further exacerbates these harms. Therefore, this event qualifies as an AI Incident due to direct and indirect harms caused by the AI system's use and malfunction in governance.[AI generated]
AI principles
Accountability, Safety, Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Children, Government, General public

Harm types
Physical (injury), Physical (death), Human or fundamental rights, Public interest

Severity
AI incident

AI system task:
Organisation/recommenders


Articles about this incident or hazard


TikTok's harms are severe! Control Yuan says it gravely threatens Taiwan's national security, faults the Executive Yuan's lax handling and calls for review

2025-08-22
Yahoo News (Taiwan)

380,000 elementary school students use TikTok! Control Yuan members say it "harms children and threatens national security", slam the government's passive negligence

2025-08-23
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
TikTok is a platform built on AI systems for algorithmic content recommendation and data processing. The article details direct harms caused by TikTok's AI-driven content promotion, including health risks to children and privacy violations, as well as national security threats from data storage and propaganda use. These harms are realized and ongoing, meeting the criteria for an AI Incident. The government's failure to regulate or mitigate these harms further supports this classification. Therefore, this event is best classified as an AI Incident.

Harming national security and children! Control Yuan urges government to actively review regulation of TikTok and other transnational platforms

2025-08-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
TikTok's AI-driven recommendation algorithms are explicitly implicated in promoting harmful content to children, resulting in actual injuries and deaths, fulfilling harm to health criteria. Additionally, the platform's use as a tool for national security threats through data collection and content manipulation constitutes violations of rights and harm to communities. The government's failure to regulate or mitigate these harms further connects the AI system's use to the incident. Hence, the event meets the definition of an AI Incident, as the AI system's use has directly and indirectly led to significant harms.

Control Yuan investigation finds TikTok harms children and threatens national security; Executive Yuan: review will begin as soon as possible

2025-08-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-powered social media platform that uses AI systems for content recommendation and moderation. The investigation finds that TikTok's use has directly led to harms including negative effects on children's health and learning, deaths linked to dangerous challenges promoted on the platform, and national security risks from data collection and misinformation. These constitute realized harms under the AI Incident definition, involving injury to persons and harm to communities and rights. Hence, the event is classified as an AI Incident.

Harming national security and children: Control Yuan urges government to regulate TikTok

2025-08-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
TikTok employs AI algorithms for content recommendation and moderation, which are central to the harms described: promoting dangerous challenges to children and manipulating information to serve political propaganda. The article details actual harms (children's deaths linked to dangerous challenges, privacy violations, and national security threats from disinformation), indicating realized harm rather than potential risk. Hence, this qualifies as an AI Incident due to the direct or indirect role of AI in causing harm to children and communities, as well as violations of rights and national security concerns.

Trump says "TikTok national security concerns are overstated", will raise the issue with Xi Jinping at the right time

2025-08-22
NOWnews
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-powered social media platform that uses AI for content recommendation and personalization, so an AI system is involved. However, the article primarily covers political and national security concerns and regulatory actions without reporting any actual or imminent harm caused by TikTok's AI system. There is no direct or indirect harm described, nor a credible imminent risk of harm detailed. The article mainly provides context on ongoing political debate and potential future discussions, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

TikTok harms children and threatens national security; Control Yuan members urge government accountability

2025-08-22
The Epoch Times
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems, particularly recommendation algorithms, to curate and promote content. The article explicitly states that these algorithms encourage dangerous challenges causing physical harm to children, fulfilling harm to health criteria. Additionally, the collection and storage of user data on servers controlled by a foreign government implicate violations of privacy and national security, constituting harm to communities and rights. The harms are ongoing and documented, not merely potential. Hence, the event meets the definition of an AI Incident, as the AI system's use has directly and indirectly led to significant harms.

TikTok harms children and threatens national security; Control Yuan members urge government accountability

2025-08-22
The Epoch Times
Why's our monitor labelling this an incident or hazard?
TikTok employs AI algorithms for content recommendation and user data processing. The article explicitly states that TikTok's AI-driven content promotion has led to serious health harms to children (over 100 deaths and many injuries) and that TikTok is used as a tool for disinformation and data collection threatening national security. These constitute direct harms caused by the AI system's use. The involvement of AI in content curation and data handling is clear, and the harms are materialized, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Leaked TikTok internal videos: employees worried the algorithm harms teenagers

2025-08-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that influences content delivery to users. The internal videos and lawsuit indicate that this AI system's design and use have caused harm to teenagers' mental and physical health by promoting addictive and potentially harmful content. This constitutes an AI Incident because the harm is realized and linked to the AI system's use. The event is not merely a potential risk or complementary information but documents actual harm and legal action based on the AI system's impact.

Leaked TikTok internal videos: employees worried the algorithm harms teenagers

2025-08-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The TikTok algorithm is an AI system that recommends content to users based on their behavior. The internal videos and lawsuit indicate that the algorithm's design and use have directly or indirectly led to harm to minors' mental and physical health by promoting addictive usage patterns and harmful content. The harm is realized and documented through employee testimonies and legal action. Hence, this is an AI Incident due to the AI system's role in causing harm to a vulnerable group.

After the UK Online Safety Act took effect, TikTok's shift to AI moderation sparks controversy

2025-08-22
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for content moderation and age verification, indicating AI system involvement. However, there is no indication that the AI system has caused any direct or indirect harm yet. The event centers on the company's strategic shift and workforce impact, alongside regulatory evaluation of the AI system's compliance. There is no credible evidence or claim that the AI system's use could plausibly lead to harm in the near future beyond normal operational risks. Hence, it does not meet the criteria for AI Incident or AI Hazard. The main focus is on the operational and regulatory context, making it Complementary Information.

Accelerating the replacement of human labor with AI: TikTok may lay off about a hundred staff in the UK

2025-08-22
RFI
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems for automated content moderation, replacing human moderators. The AI system's role in identifying and removing harmful content is central. While no concrete harm has been reported yet, the union's warnings about potential serious consequences from using immature AI tools indicate a plausible risk of harm to user safety and community well-being. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (e.g., failure to remove harmful content or wrongful removals causing user harm). There is no indication that harm has already occurred, so it is not an AI Incident. The event is more than just general AI-related news or a product update, so it is not Unrelated or Complementary Information.

Accelerating the replacement of human labor with AI: TikTok may lay off about a hundred staff in the UK

2025-08-22
RFI
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for content moderation, which is a clear AI system involvement. The use of AI to replace human moderators is a development and use scenario. While there are concerns about potential safety risks to users, no direct or indirect harm has been reported as having occurred. Therefore, this situation represents a plausible risk of harm due to AI use, fitting the definition of an AI Hazard rather than an AI Incident. The article also discusses regulatory context and company restructuring but does not focus on responses or updates to past incidents, so it is not Complementary Information.

2025-08-23
Financial Times Chinese Edition (FTChinese)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to automate content moderation, which is a clear AI system involvement. However, the article does not describe any realized harm or incident caused by the AI system, nor does it describe a plausible future harm directly resulting from this automation. Instead, it reports a corporate restructuring and workforce reduction due to AI adoption. This is a development in the AI ecosystem and governance but does not constitute an AI Incident or AI Hazard. Therefore, it is best classified as Complementary Information, as it provides context on AI's impact on content moderation and employment but does not report harm or risk of harm.

TikTok's harms are severe! Control Yuan says it gravely threatens Taiwan's national security, faults the Executive Yuan's lax handling and calls for review

2025-08-22
SET News (SETN)
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies TikTok's algorithmic recommendation system as a tool for CCP's cognitive warfare and misinformation campaigns, which harm Taiwan's national security and communities. It also highlights the improper collection and potential misuse of minors' personal data, violating privacy rights. The harms are ongoing and directly linked to the AI system's use and its outputs. The government's failure to adequately regulate or mitigate these harms further supports classification as an AI Incident. The presence of AI systems (content recommendation algorithms), the realized harms (privacy violations, misinformation, threats to national security and minors' health), and the direct causal link justify this classification.

TikTok harms children and threatens national security; Control Yuan members urge government accountability

2025-08-22
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
TikTok employs AI-based recommendation algorithms that influence user behavior and content exposure. The article details direct harms: children's health risks from dangerous challenges promoted by the platform's algorithm, and national security threats from data collection and misinformation campaigns. These harms are directly linked to the AI system's use and outputs. The involvement of AI in causing injury to children and violations of rights and security meets the criteria for an AI Incident rather than a hazard or complementary information.

Control Yuan investigation: TikTok threatens children's health; relevant agencies should review and improve

2025-08-23
Mandarin Daily News
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that influences content exposure. The article details how this AI-driven push of dangerous challenges and inappropriate content has negatively affected children's health and privacy, fulfilling the criteria for harm to persons and violation of rights. The government's failure to effectively regulate or enforce rules exacerbates the issue. Since harm has occurred and is directly linked to the AI system's use, this event qualifies as an AI Incident rather than a hazard or complementary information.

After the UK Online Safety Act took effect, TikTok's shift to AI moderation sparks controversy

2025-08-22
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for content moderation and age verification, indicating AI system involvement. However, it does not describe any actual harm or incident caused by the AI system's malfunction or misuse. The focus is on the company's operational changes, regulatory compliance, and labor impacts, which are responses to new legal requirements and technological advances. No plausible future harm from the AI system is detailed beyond general concerns about AI replacing human jobs. Hence, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides important complementary information about AI's role in content moderation and regulatory challenges in the UK context.

Control Yuan blasts "TikTok has become a CCP cognitive-warfare tool", gravely threatening national security and children

2025-08-22
Nextapple
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-driven platform employing recommendation algorithms that influence user content exposure. The article documents realized harms: children's health injuries and deaths linked to dangerous challenges promoted by the platform's AI-driven content push; violations of children's privacy rights through unauthorized data collection; and the platform's role in CCP-led disinformation campaigns threatening national security and societal cohesion. These harms are directly or indirectly caused by the AI system's use and algorithmic operations. The government's failure to adequately regulate or mitigate these harms further underscores the incident nature. Therefore, this event meets the criteria for an AI Incident due to direct and indirect harms caused by the AI system's use.

Trump extends TikTok ban deadline for the fourth time, says a US buyer is lined up

2025-08-23
UDN
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-driven platform that uses AI systems for content recommendation and data handling. The national security and privacy concerns stem from the AI system's use and data processing, which could plausibly lead to harm such as privacy violations or misuse of user data. However, the article does not report any realized harm or incident caused by TikTok's AI system; rather, it discusses ongoing regulatory and political actions to prevent potential harm. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future risk of harm from the AI system's use if not properly managed.

Appointment of a former Israeli military member as an executive raises concerns; Fahmi asks TikTok to explain

2025-08-24
Malaysiakini.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions TikTok's algorithm and its potential influence on content visibility, implying the use of AI systems for content moderation and recommendation. The concern is about possible bias or manipulation of content visibility, which could lead to harm to communities through biased or unbalanced information exposure. However, no actual harm has been reported yet; the concerns are about plausible future harm due to the AI system's use in content moderation. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm related to community impact and content bias.