TikTok's AI Algorithms Enable Spying and Extremist Content Exposure


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok's AI-driven recommendation and data processing systems enabled ByteDance employees to spy on journalists, violating privacy and press freedoms. Additionally, an experiment conducted for the U.S. Jan. 6 Committee found that TikTok's algorithm surfaced Nazi content within 75 minutes, highlighting the platform's role in spreading harmful extremist material. [AI generated]

Why's our monitor labelling this an incident or hazard?

TikTok's recommendation algorithm is an AI system that infers user preferences and serves content accordingly. The experiment showed that, without any user input, the AI system surfaced Nazi and extremist content within 75 minutes, demonstrating a direct link between the AI system's outputs and exposure to harmful content. This exposure can harm communities by spreading extremist ideologies and misinformation, which aligns with harm to communities under the AI Incident definition. The article also details TikTok's moderation challenges and delayed responses to extremist content, reinforcing the AI system's role in the harm. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's content recommendations. [AI generated]
AI principles
Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Accountability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Workers; Consumers; General public

Harm types
Human or fundamental rights; Public interest; Psychological; Reputational

Severity
AI incident

Business function:
Marketing and advertisement; ICT management and information security; Monitoring and quality control

AI system task:
Organisation/recommenders


Articles about this incident or hazard


Jan. 6 Committee Experiment Found TikTok Went From Zero To Nazi in 75 Minutes

2023-01-05
Rolling Stone
Why's our monitor labelling this an incident or hazard?
See the summary rationale above.

BBC Journalists Urge Bosses To Cool Their TikTok "Obsession" After App Spied On Reporters

2023-01-06
Deadline
Why's our monitor labelling this an incident or hazard?
TikTok's AI-driven data collection and recommendation system was used to spy on journalists, which is a direct violation of privacy and press freedoms, falling under violations of human rights and legal protections. The harm has already occurred as journalists' data was accessed improperly by employees of TikTok's parent company. The BBC's internal concerns and the ongoing use of TikTok despite these issues underscore the incident's significance. The AI system's role is pivotal as the spying was enabled by TikTok's AI-powered data processing capabilities. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

TikTok Pauses US Security Deal Amid Rising Questions About the App's Connection with the CCP

2023-01-06
Social Media Today
Why's our monitor labelling this an incident or hazard?
TikTok qualifies as an AI system because it relies on algorithmic content recommendation and data processing. The reported spying on journalists via TikTok involved the use of the AI system's capabilities to monitor individuals without consent, constituting a violation of rights and harm to persons. The event describes actual harm (privacy violations and spying), not merely potential harm, and the US government's regulatory responses indicate the seriousness of the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Bad sign for TikTok security talks with Biden team as consultant hires delayed

2023-01-06
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (TikTok's algorithm) and its use, specifically the concerns about misuse or manipulation by the parent company. However, no direct or indirect harm has been reported as having occurred due to the AI system's development, use, or malfunction. The focus is on the potential security risks and the stalled efforts to mitigate them through consultant reviews. This fits the definition of Complementary Information, as it provides context and updates on governance and security discussions around an AI system without describing a specific AI Incident or AI Hazard.