TikTok Algorithm Systematically Favored Republican Content During 2024 US Elections


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study published in Nature found that TikTok's AI-driven recommendation algorithm systematically prioritized pro-Republican content in New York, Texas, and Georgia ahead of the 2024 US presidential election. Researchers using dummy accounts observed significant partisan bias, raising concerns about the algorithm's impact on political information exposure and democratic fairness.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system: TikTok's recommendation algorithm, which uses AI to curate content for users. The study demonstrates that the system's use directly led to a significant harm, systematic political bias in content exposure, which constitutes harm to communities by skewing political information and potentially influencing election outcomes. This undermines the right to access balanced information and can weaken democratic processes. It therefore qualifies as an AI Incident: the AI system's use directly led to biased political information dissemination during a critical election period.[AI generated]
AI principles
Fairness
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public
Consumers

Harm types
Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard


TikTok's algorithm favored Republican content in 2024 US elections, study finds

2026-05-06
The Guardian
Why's our monitor labelling this an incident or hazard?
The TikTok recommendation algorithm is an AI system that shapes content exposure. The study shows systematic bias in its recommendations that could plausibly lead to harm by skewing political information and affecting electoral fairness, which would be harm to communities and potentially a violation of rights. Because the article reports a credible risk and imbalance caused by the AI system rather than actual realized harm, it qualifies as an AI Hazard. There is no indication that direct or indirect harm has already occurred, so it is not an AI Incident. The article is not merely complementary information, since it reports new findings about algorithmic bias with potential societal impact, and it is not unrelated, since it clearly involves an AI system and its societal effects.

TikTok's algorithm systematically skewed to the right during the 2024 US elections

2026-05-06
Nature
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: TikTok's recommendation algorithm, which uses AI to curate content for users. The study demonstrates that the system's use directly led to a significant harm, systematic political bias in content exposure, which constitutes harm to communities by skewing political information and potentially influencing election outcomes. This undermines the right to access balanced information and can weaken democratic processes. It therefore qualifies as an AI Incident: the AI system's use directly led to biased political information dissemination during a critical election period.

Researchers expose algorithm skew that boosted Trump in 2024

2026-05-06
Raw Story
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that influenced the political content shown to users, producing a partisan imbalance favoring one political side. Such systematic bias in information exposure can distort democratic processes and harm communities by skewing political opinions and election outcomes. Because the study shows this skewed exposure had already occurred and may have influenced voter behavior, it constitutes an AI Incident due to indirect harm to communities and democratic rights.

Study says that TikTok algorithm prioritized Republican content

2026-05-07
UPI
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that generates content recommendations based on user behavior. The study found that it systematically prioritized Republican content over Democratic content, producing asymmetric partisan exposure. This bias in content recommendation can reasonably be linked to harm to communities by influencing political perceptions and potentially undermining fair democratic engagement. The harm is realized, as the biased exposure has already occurred, rather than being merely a potential risk. Therefore, this event meets the criteria for an AI Incident: the AI system's use directly led to harm to communities through biased information dissemination.

Study says that TikTok algorithm prioritized Republican content

2026-05-07
Yahoo
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that influences content exposure. The study reveals partisan bias in recommendations, which could plausibly lead to harm to communities by skewing political information and potentially influencing election outcomes or social cohesion. However, the article does not describe actual harm or incidents resulting from this bias, only the potential for such harm. Thus, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article is not updating or responding to a prior incident but reporting new research findings. It is not Unrelated because it clearly involves an AI system and its societal impact.