TikTok's AI Algorithm Accused of Racial Bias and Privacy Violations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok's AI-driven recommendation algorithm has been found to create filter bubbles based on users' race and physical appearance, reinforcing social biases and limiting exposure to diverse content. Experts also warn of privacy violations, illegal data collection, and censorship linked to TikTok's Chinese ownership, raising concerns about surveillance and human rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The TikTok recommendation algorithm is an AI system that suggests accounts to follow based on user behavior. The researcher's findings indicate that the algorithm's outputs are biased along racial and physical appearance lines, which can lead to violations of rights and harm to communities by reinforcing social biases and limiting visibility of marginalized groups. The harm is indirect but materialized, as it affects users' exposure and social inclusion. Additionally, TikTok's prior admission of suppressing content from queer, fat, and disabled users supports the presence of discriminatory impacts. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.[AI generated]
AI principles
Accountability, Fairness, Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers, General public

Harm types
Human or fundamental rights, Public interest

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Organisation/recommenders

Articles about this incident or hazard

TikTok parental controls - the 6 settings you need to change right now to protect your kids

2020-02-25
The Sun
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content recommendation and moderation, which can indirectly lead to harms such as exposure to inappropriate content and predatory behavior targeting children. The article highlights these harms and suggests parental control settings to mitigate risks. However, it does not describe a specific event where the AI system directly or indirectly caused new harm, nor does it describe a plausible future harm scenario beyond the known risks. The main focus is on raising awareness and advising on safety settings, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
TikTok Launches New Parental Controls to Keep Kids Safe

2020-02-24
MakeUseOf
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the form of content moderation and user interaction controls to protect children on TikTok. However, the article does not describe any harm or incident resulting from these AI systems, nor does it indicate any plausible future harm. Instead, it reports on new safety features intended to reduce potential harm. Therefore, this is best classified as Complementary Information, as it provides context on societal and technical responses to AI-related risks in social media platforms.
There's something strange about TikTok recommendations

2020-02-25
Vox
Why's our monitor labelling this an incident or hazard?
The article discusses an AI-powered recommendation system on TikTok that appears to recommend accounts with similar physical traits to the ones followed, potentially reinforcing biases and filter bubbles. While this raises concerns about social harm and bias, no actual harm or violation of rights has been documented or confirmed. The event highlights a plausible risk of harm due to the AI system's behavior but does not report a realized incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Is TikTok's Algorithm Racist? This AI Researcher Says It Has Alarming Bias

2020-02-27
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The TikTok recommendation algorithm is an AI system that suggests accounts to follow based on user behavior. The researcher's findings indicate that the algorithm's outputs are biased along racial and physical appearance lines, which can lead to violations of rights and harm to communities by reinforcing social biases and limiting visibility of marginalized groups. The harm is indirect but materialized, as it affects users' exposure and social inclusion. Additionally, TikTok's prior admission of suppressing content from queer, fat, and disabled users supports the presence of discriminatory impacts. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.
Why is TikTok creating filter bubbles based on your race?

2020-02-28
WIRED UK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as TikTok's recommendation algorithm, which uses collaborative filtering to personalize content. The algorithm's use and the resulting biased feedback loop have directly led to social harms, including racial bias and segregation in content recommendations, which affect users' rights to equal representation and access to diverse information. These harms align with violations of human rights and harm to communities as defined in the framework. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use.
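The feedback-loop mechanism described here can be illustrated with a small hypothetical sketch. This is not TikTok's actual implementation; the toy accounts, the explicit `group` label, and the `recommend` heuristic are all illustrative assumptions. A real collaborative filter infers similarity from co-follow behaviour rather than explicit labels, but the dynamic is the same: recommending accounts similar to those already followed quickly homogenizes a feed.

```python
# Toy simulation of a similarity-based recommender feedback loop.
# Assumption: each account carries a latent "group" standing in for
# whatever cluster the system implicitly learns (e.g. appearance).
import random

random.seed(0)
accounts = [{"id": i, "group": i % 2} for i in range(100)]

def recommend(followed, k=5):
    """Score candidates by similarity to accounts already followed
    (here, crudely, shared group membership)."""
    followed_groups = [accounts[f]["group"] for f in followed]

    def score(a):
        return sum(1 for g in followed_groups if g == a["group"])

    candidates = [a for a in accounts if a["id"] not in followed]
    candidates.sort(key=score, reverse=True)
    return [a["id"] for a in candidates[:k]]

# Start from a single seed follow and accept the top suggestion each round.
follows = {0}
for _ in range(20):
    follows.add(recommend(follows, k=1)[0])

groups = [accounts[f]["group"] for f in follows]
homogeneity = groups.count(max(set(groups), key=groups.count)) / len(groups)
print(homogeneity)  # → 1.0: the toy feed converges on a single cluster
```

Under these assumptions, one seed follow is enough to lock the recommendations into a single cluster, which is the "biased feedback loop" the article describes: the system never surfaces accounts outside the cluster the user started in.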
TikTok links to China put every user at risk of 'spying and censorship', experts warn

2020-02-27
The Scottish Sun
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-powered social media platform using advanced algorithms to recommend content and process user data. The article documents realized harms including illegal data collection from children, censorship of content related to human rights and political issues, and concerns about surveillance and foreign influence. These constitute violations of privacy and human rights, fitting the definition of an AI Incident. The involvement of AI in content recommendation and data processing is explicit and central to the harms described. The article does not merely warn of potential future harm but reports ongoing and past harms linked to the AI system's use and operation.
TikTok linked to dozens of DEATHS from suicide videos & killer stunts

2020-02-28
The Scottish Sun
Why's our monitor labelling this an incident or hazard?
TikTok's AI-powered recommendation system is central to the dissemination and visibility of harmful content, including suicide videos and dangerous challenges, which have led to real-world deaths and injuries. The platform's failure to promptly remove such content and alert authorities indicates malfunction or inadequate use of AI moderation tools. The harms described include injury and death (a), and harm to communities (d). Given the direct link between the AI system's outputs and these harms, this event qualifies as an AI Incident.
TikTok's boss is mysterious tech billionaire who models himself on Mark Zuckerberg and makes workers do PRESS-UPS

2020-02-26
The Scottish Sun
Why's our monitor labelling this an incident or hazard?
TikTok's AI-driven content recommendation system is central to the app's operation and user experience. The article details how this system has indirectly led to harm by facilitating the exposure of minors to inappropriate and dangerous content, including contact with paedophiles and harmful trends. This constitutes harm to the health and safety of a vulnerable group, fulfilling the criteria for an AI Incident. The article does not merely speculate about potential harm but reports ongoing issues and parental concerns, indicating realized harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
How much information about you is the Chinese Government accessing through your TikTok account?

2020-03-05
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (TikTok uses AI for content recommendation, moderation, and facial recognition) and discusses the potential misuse of user data and censorship, which could plausibly lead to harms such as violations of privacy and human rights, as well as national security risks. However, no direct or indirect harm has been reported as having occurred yet. The concerns are about plausible future harms from data sharing and censorship enabled by AI and data practices. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident, but no incident has been confirmed or realized in the article.
Teens love video app TikTok, but do they love it too much?

2020-03-05
The Myanmar Times
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems for content recommendation and moderation, which are central to its operation. The article outlines potential risks including privacy breaches, censorship aligned with Chinese government interests, and exposure of children to inappropriate content. These concerns represent plausible future harms or ongoing risks rather than documented incidents of harm. There is no explicit mention of a specific event where TikTok's AI directly or indirectly caused injury, rights violations, or other harms. Therefore, the event fits the definition of Complementary Information as it provides context, societal and governance responses, and ongoing assessment of AI-related risks without describing a concrete AI Incident or AI Hazard.
Teens love the video app TikTok. Do they love it too much? | The Associated Press

2020-03-06
BusinessMirror
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content recommendation and personalization, which is reasonably inferred from the description of personalized video feeds. The article highlights potential harms such as espionage, censorship, and exposure to inappropriate content, but these remain concerns or risks rather than documented incidents of harm. There is no direct or indirect evidence of harm having occurred, only plausible future risks and regulatory actions. Therefore, this event fits the definition of Complementary Information as it provides context, societal and governance responses, and ongoing assessment of AI-related risks without describing a specific AI Incident or AI Hazard.