X's AI Algorithm Amplifies Right-Wing Political Content to Uninterested Users

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Investigations by The Wall Street Journal and The Washington Post found that X's (formerly Twitter) AI-driven recommendation algorithm disproportionately pushes right-leaning political content, especially pro-Trump posts, to users—even those who show no interest in politics—potentially influencing public discourse and undermining trust in election integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the recommendation algorithm) that directly influences the political content users see, leading to disproportionate exposure to partisan and potentially polarizing content. This has direct implications for societal harm by affecting political discourse and community cohesion. The harm is realized, not merely potential: the article documents the actual content served and its partisan skew. This therefore qualifies as an AI Incident, since the AI system's use has caused harm to communities through biased content amplification.[AI generated]
AI principles
Accountability; Fairness; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Safety

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
Consumers; General public

Harm types
Public interest; Human or fundamental rights; Reputational

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Organisation/recommenders


Articles about this incident or hazard

Elon Musk is backing Trump. Is Musk's Twitter backing Trump, too?

2024-10-29
Business Insider
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through Twitter's content recommendation algorithms, which use AI to curate user feeds. The discussion centers on whether these algorithms amplify right-leaning political content, potentially influenced by user behavior or owner intervention. However, no direct or indirect harm (such as misinformation causing societal harm, violation of rights, or other harms) is documented as having occurred. The article mainly provides analysis and speculation about algorithmic bias and platform management, which fits the definition of Complementary Information. It enhances understanding of AI's societal impact without describing a specific AI Incident or AI Hazard.
X algorithm feeds users political content -- whether they want it or not

2024-10-29
mint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the recommendation algorithm) that directly influences the political content users see, leading to disproportionate exposure to partisan and potentially polarizing content. This has direct implications for societal harm by affecting political discourse and community cohesion. The harm is realized, not merely potential: the article documents the actual content served and its partisan skew. This therefore qualifies as an AI Incident, since the AI system's use has caused harm to communities through biased content amplification.
Musk's social media posts have a sudden boost since July, new study reveals

2024-11-01
Business Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses algorithmic manipulation of social media content visibility, which involves AI systems for content curation and recommendation. The manipulation is linked to political influence and potential election interference, which could plausibly lead to harm to communities and democratic processes (harm category d). However, the article does not confirm that this manipulation has directly caused harm yet, only that it plausibly could. The lack of direct evidence of realized harm and the focus on potential influence and opacity of the platform's algorithms align with the definition of an AI Hazard. The event is not merely general AI news or complementary information because it centers on the plausible risk of harm from AI system use in a critical societal context.
Exclusive - X Algorithm Feeds Users Political Content -- Whether They ...

2024-10-30
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The AI system involved is the content recommendation algorithm that infers user interests and decides what posts to show. Its use has directly led to the dissemination of partisan political content and misinformation about election integrity, which constitutes harm to communities by potentially undermining democratic processes and public trust. Since the harm is occurring through the AI system's outputs influencing users, this qualifies as an AI Incident under the framework's definition of harm to communities caused by AI systems.
Elon Musk is backing Trump. Is Musk's Twitter backing Trump, too? | Business Insider India

2024-10-29
Business Insider India
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through Twitter's content recommendation algorithms, which influence what users see. However, it does not document any realized harm such as misinformation causing social disruption, rights violations, or other direct or indirect harms. Nor does it describe a plausible future harm scenario beyond general speculation. The focus is on analysis and theories about algorithmic bias and platform management under Musk's ownership, which fits the definition of Complementary Information as it enhances understanding of AI's societal impact without reporting a new incident or hazard.
X algorithm shows users political content whether they want it or not - Washington Examiner

2024-10-29
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the recommendation algorithm) that is actively shaping the content users see. The algorithm's behavior leads to a form of informational harm by pushing political content that users do not want, which can affect communities by influencing political discourse and potentially sowing doubt about election integrity. This constitutes harm to communities and possibly a violation of users' rights to access information aligned with their preferences. Since the harm is occurring through the AI system's use and its outputs, this qualifies as an AI Incident.
Elon Musk's X May Be Giving Right-Wing Content the Upper Hand

2024-10-30
Vanity Fair
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of recommendation algorithms that influence the content users see on the platform. The investigations show that these AI systems have led to a disproportionate amplification of right-wing content, which can be reasonably inferred to cause harm to communities by skewing political discourse and potentially undermining democratic processes. Although there is no direct evidence of intentional bias, the AI system's outputs have directly led to a significant imbalance in information exposure, which constitutes harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in content recommendation and amplification.
Elon Musk is backing Trump. Is Musk's Twitter backing Trump, too?

2024-10-29
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Twitter's content recommendation algorithms that influence what users see. The article discusses the use and potential influence of these AI systems in amplifying political content, which is relevant to AI's societal impact. However, it presents no direct or indirect evidence that this amplification has caused harm, such as misinformation leading to injury, rights violations, or community harm, nor does it describe a credible future harm beyond the current observations. The article mainly provides analysis and speculation about the AI system's behavior and its alignment with user preferences or owner influence, which fits the definition of Complementary Information: it enhances understanding of AI's role in political content dissemination but does not report a new incident or hazard.
Elon Musk's X May Be Giving Right-Wing Content the Upper Hand

2024-10-30
DNyuz
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of recommendation algorithms and AI-generated content, which influence the visibility of political messages. However, it does not report any realized harm such as injury, rights violations, or community harm directly caused by these AI systems. Nor does it present a credible risk of future harm from these AI systems beyond existing political discourse dynamics. The focus is on analysis and reporting of algorithmic behavior and its societal implications, fitting the definition of Complementary Information rather than an Incident or Hazard.