Singapore Regulator Warns X and TikTok Over AI Failures in Detecting Harmful Content

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Singapore's Infocomm Media Development Authority (IMDA) issued letters of caution to X and TikTok and placed both platforms under enhanced supervision after their AI-based systems failed to proactively detect and remove child sexual exploitation and terrorism content. Both platforms must implement improvements or face further regulatory action. [AI generated]

Why's our monitor labelling this an incident or hazard?

The platforms' content moderation systems likely rely on AI to detect harmful content. The failure of these AI systems to accurately identify and remove child sexual exploitation and abuse material and terrorism content resulted in the dissemination of that material, which constitutes harm to communities and individuals. This meets the criteria for an AI Incident because the AI systems' malfunction or inadequate performance directly led to harm. The article details realized harm and the regulatory actions taken in response, confirming incident status rather than a mere hazard or complementary information. [AI generated]

AI principles
Safety; Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
Children; General public

Harm types
Human or fundamental rights; Psychological; Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard

TikTok, X issued letters after failing to detect and remove child sexual abuse, terrorism content: IMDA

2026-03-31
The Straits Times
Why's our monitor labelling this an incident or hazard?
The platforms' content moderation systems likely rely on AI to detect harmful content. The failure of these AI systems to accurately identify and remove child sexual exploitation and abuse material and terrorism content resulted in the dissemination of that material, which constitutes harm to communities and individuals. This meets the criteria for an AI Incident because the AI systems' malfunction or inadequate performance directly led to harm. The article details realized harm and the regulatory actions taken in response, confirming incident status rather than a mere hazard or complementary information.

X and TikTok issued letters of caution by IMDA for serious weaknesses in detection, removal of harmful content

2026-03-31
CNA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in content detection and removal systems on X and TikTok. The regulator found serious weaknesses in these AI systems' ability to detect and remove harmful content such as child sexual exploitation material and terrorism-related content, whose circulation clearly harms individuals and communities. The platforms' failure to address these issues effectively means the AI systems' malfunction or inadequate use has directly or indirectly led to harm. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is ongoing and recognized by the regulator.

Singapore warns X and TikTok over failures to detect child sexual and terrorism content, places both under enhanced supervision

2026-03-31
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in automated detection systems by X and TikTok to identify harmful content. The failure of these AI systems to adequately detect and remove child sexual exploitation and terrorism content has increased exposure to harmful material, a clear harm to communities and to vulnerable groups such as children. The regulatory response and enhanced supervision underscore the seriousness of the harm caused. Since harm has already occurred due to the AI systems' insufficient performance, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

AsiaOne

2026-03-31
AsiaOne
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in automated detection systems for harmful content on social media platforms. The failure of these AI systems to proactively detect and remove egregiously harmful content such as child sexual exploitation material (CSEM) and terrorism-related videos has directly led to harm, including violations of human rights and damage to communities. The regulator's intervention and the platforms' commitment to improve their AI systems confirm the systems' involvement in the harm. Hence, this event meets the criteria for an AI Incident, as the AI systems' malfunction or insufficient performance has directly contributed to significant harm.

Singapore Issues Warning To X, TikTok Over Child Sexual, Terrorism Content

2026-03-31
BERNAMA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by X and TikTok for automated detection of harmful content, specifically CSEM and terrorism-related material. The regulator's warning and required improvements indicate prior issues with these AI systems' effectiveness, which have implications for harm to children and communities. However, the article does not describe a specific incident of harm occurring due to AI malfunction or misuse, but rather focuses on regulatory oversight and planned improvements. Therefore, this is best classified as Complementary Information, as it provides context on governance responses and ongoing efforts to address AI-related harms rather than reporting a new AI Incident or Hazard.

IMDA issues letters of caution to X and TikTok over weaknesses in detecting and removing harmful content

2026-03-31
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for detecting harmful content and highlights their serious weaknesses, which have directly resulted in the prolonged presence of harmful material such as CSEM and terrorism-related content on the platforms. This constitutes harm to communities and a violation of legal obligations. The involvement of AI in the development and use phases is clear, and the harm is realized, not just potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Singapore regulator places X and TikTok under supervision

2026-04-01
Telecompaper
Why's our monitor labelling this an incident or hazard?
The platforms' use of AI systems for content moderation is reasonably inferred from their role in detecting and removing harmful content. The regulator's report identifies shortcomings in these AI systems' performance, which relate to potential or past harm. However, the article focuses on the regulatory response (letters of caution and enhanced supervision) rather than describing a new incident of harm caused by AI. Thus, it does not report a new AI Incident or AI Hazard but rather governance actions addressing known issues, fitting the definition of Complementary Information.

IMDA warns X, TikTok over 'serious weaknesses' in content safety

2026-04-01
Singapore Business Review
Why's our monitor labelling this an incident or hazard?
The platforms use AI-based content detection systems to identify and remove harmful content proactively. The IMDA's findings of increased harmful content indicate that these AI systems have malfunctioned or failed in their intended use, leading to harm to communities through exposure to child sexual exploitation material and terrorism content. This meets the criteria for an AI Incident, as the AI systems' malfunction has directly led to harm and to legal violations under the Code of Practice for Online Safety.

TikTok, X warned by IMDA after failing to detect and remove child sexual abuse, terrorism content

2026-04-01
singaporelawwatch.sg
Why's our monitor labelling this an incident or hazard?
The platforms employ AI-based detection tools to identify harmful content. The failure to accurately detect and remove child sexual exploitation and terrorism content indicates a malfunction or inadequacy in these AI systems. This failure has directly led to the presence and dissemination of harmful content, which constitutes harm to communities and violations of rights. The regulatory response and the requirement for rectification further confirm that the harm has materialized. Hence, this qualifies as an AI Incident.