
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A study published in Nature found that TikTok's AI-driven recommendation algorithm systematically prioritized pro-Republican content in New York, Texas, and Georgia ahead of the 2024 US presidential election. Researchers using dummy accounts observed significant partisan bias, raising concerns about the algorithm's impact on political information exposure and democratic fairness.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: TikTok's recommendation algorithm, which uses AI to curate content for users. The study demonstrates that the system's use directly led to a significant harm—systematic political bias in content exposure—which can be considered a harm to communities, since it skews political information and could influence election outcomes. This constitutes a violation of the right to access balanced information and can undermine democratic processes. It therefore qualifies as an AI Incident: the AI system's use directly led to harm in the form of biased political information dissemination during a critical election period.[AI generated]