X's Algorithm Shifts Users Toward Conservative Political Views, Study Finds


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A large-scale study published in Nature found that the AI-driven feed algorithm of X (formerly Twitter) nudges users toward more conservative political attitudes. The experiment, with nearly 5,000 US users, showed that these effects persist even after switching back to a chronological feed, raising concerns about algorithmic influence on democratic discourse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as the algorithmic timeline of X, which orders content based on predicted user engagement. The study shows that this AI system's use has directly led to shifts in users' political opinions, a form of harm to communities through manipulation of information exposure. Although the harm is subtle and cumulative rather than immediate or physical, it fits within the framework's definition of harm to communities. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and documented through the controlled experiment.[AI generated]
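The mechanism at issue here is engagement-predicted ranking: rather than showing posts in chronological order, the feed sorts them by how strongly a model predicts each user will react. The minimal sketch below illustrates that contrast; all class names, fields, and weights are illustrative assumptions for this note, not X's actual implementation.

```python
# Minimal sketch of engagement-based feed ranking, the mechanism the study
# examines. All names and weights are hypothetical, chosen for illustration;
# a production system would combine many more signals.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    timestamp: float            # posting time (for the chronological baseline)
    predicted_likes: float      # model-estimated probability of a like
    predicted_replies: float    # model-estimated probability of a reply
    predicted_reposts: float    # model-estimated probability of a repost

def engagement_score(post: Post) -> float:
    # Hypothetical weighted sum of predicted reactions.
    return (1.0 * post.predicted_likes
            + 2.0 * post.predicted_replies
            + 1.5 * post.predicted_reposts)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Algorithmic feed: ordering depends only on predicted engagement,
    # so content expected to provoke reactions is surfaced first.
    return sorted(posts, key=engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Control condition in the study: newest posts first, no prediction model.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)
```

Under this sketch, the two orderings can disagree arbitrarily: a newer but low-engagement post leads the chronological feed while an older, reaction-provoking post leads the algorithmic one, which is the exposure difference the experiment manipulated.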
AI principles
Fairness; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Public interest

Severity
AI incident

Business function
Other

AI system task
Organisation/recommenders


Articles about this incident or hazard


Ni 'fake news', ni silenciar cuentas: así consigue la red social X de Elon Musk que te hagas de derechas sin darte cuenta

2026-02-18
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the algorithmic timeline of X, which orders content based on predicted user engagement. The study shows that this AI system's use has directly led to shifts in users' political opinions, a form of harm to communities through manipulation of information exposure. Although the harm is subtle and cumulative rather than immediate or physical, it fits within the framework's definition of harm to communities. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and documented through the controlled experiment.

El algoritmo de la red social X prioriza contenido conservador sobre informaciones contrastadas

2026-02-18
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the feed algorithm of X, which influences content visibility and user engagement. The study provides empirical evidence that the AI system promotes conservative and potentially misleading content, which has led to increased polarization and misinformation dissemination, constituting harm to communities. Although the study does not prove direct changes in political affiliation, the algorithm's role in shaping public agenda and amplifying less reliable sources is a clear indirect cause of societal harm. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm to communities through biased information amplification and potential misinformation spread.

X's Algorithm Makes Users More Conservative, Study Suggests

2026-02-18
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (X's algorithmic feed) that influences content exposure and user political views, which can be reasonably inferred as an AI system due to its algorithmic ranking and content promotion. The study shows that the algorithm's use leads to a measurable shift in user attitudes, which can be considered an indirect harm to communities through political polarization. However, the article does not report a specific incident of harm or violation but rather presents research findings on potential societal impacts. Therefore, this qualifies as Complementary Information, providing important context and understanding of AI's societal effects without describing a concrete AI Incident or imminent hazard.

El algoritmo de X, la red social de Elon Musk, a examen: así puede influir la pestaña 'Para ti' en tu visión de la política

2026-02-18
El Español
Why's our monitor labelling this an incident or hazard?
The AI system in question is the recommendation algorithm of X, which uses AI to reorder and select content for users. The study provides empirical evidence that this AI system's outputs have directly influenced users' political attitudes and behaviors, causing polarization and ideological shifts. This manipulation of political views is a form of harm to communities and potentially a violation of rights related to fair information access. Since the harm is realized and documented, this qualifies as an AI Incident rather than a hazard or complementary information. The article focuses on the impact of the AI system's use, not just on research or policy responses, so it is not complementary information. It is not unrelated because the AI system and its effects are central to the event.

Turning on the 'for you' feed on X shifted political opinions, but turning it off did not

2026-02-18
Nature
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the feed algorithm) whose use was experimentally manipulated to study its effects on political attitudes. The algorithm's use led to measurable shifts in political opinions and engagement, indicating a significant influence on users' information environment. However, the article does not describe any direct or indirect harm occurring to individuals, groups, or communities, nor does it report violations of rights or disruptions. The study's findings contribute to understanding AI's societal impact and the persistence of algorithmic influence, which is valuable for governance and risk assessment. Since no actual harm or plausible immediate harm is reported, and the focus is on research results and implications, the event fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

The political effects of X's feed algorithm - Nature

2026-02-18
Nature
Why's our monitor labelling this an incident or hazard?
The event involves an AI system—the feed algorithm—that influences content curation and user engagement. The study demonstrates that the algorithmic feed affects political attitudes and content exposure, which could have societal implications. However, the article does not describe any realized harm such as injury, rights violations, or other significant harms directly caused by the AI system. The findings are empirical evidence of the algorithm's effects but do not report an incident of harm or a plausible immediate hazard. Therefore, this event is best classified as Complementary Information, as it provides detailed research findings that enhance understanding of AI's societal impacts without reporting a specific AI Incident or AI Hazard.

Un estudio lo confirma: el algoritmo de X (Twitter) radicaliza a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
El Periódico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the recommendation algorithm of X, which uses personalized and other inputs to curate content feeds. The study demonstrates that this AI system's use has directly led to ideological radicalization and amplification of extremist content, which constitutes harm to communities and social fabric. This fits the definition of an AI Incident because the AI system's use has directly led to significant, clearly articulated harm (social polarization and radicalization). The article does not merely warn of potential harm but confirms realized effects based on empirical research.

Un estudio muestra que el algoritmo de X prioriza el contenido relacionado con posiciones políticas conservadoras

2026-02-18
Cadena SER
Why's our monitor labelling this an incident or hazard?
The algorithm in question is an AI system that influences content recommendation and user engagement. The study shows that the AI system biases content towards conservative political views, which can be seen as a form of harm to communities by shaping political attitudes. However, the article does not report a specific event where this bias caused direct or indirect harm, nor does it describe a malfunction or misuse leading to harm. Instead, it presents research findings that enhance understanding of AI's societal impact. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

El 'Para ti' de X conduce a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
Deia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the algorithmic feed on X, which filters and orders content to personalize user experience. The study shows that this AI system's use directly led to users adopting more conservative political views, indicating harm to communities via political polarization and manipulation of information. The harm is realized and documented through experimental evidence, not merely potential. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to significant societal harm.

Demostrado por la ciencia: el algoritmo de X no es neutral y conduce a posiciones políticas conservadoras

2026-02-18
HERALDO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithmic recommendation system that filters and orders social media content to personalize user feeds. The study shows that the use of this AI system has directly led to significant shifts in users' political attitudes, which constitutes harm to communities through political polarization and manipulation of information exposure. Since the AI system's use has caused realized harm (political bias and polarization), this qualifies as an AI Incident under the framework. The article does not merely discuss potential risks or governance responses but documents actual effects demonstrated by empirical research.

X's Algorithm Pushes Users to Lean More Conservative, Researchers Find

2026-02-18
Gizmodo
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system that curates content for users, and its use has directly led to a shift in political opinions among users, favoring conservative views. This constitutes harm to communities by influencing political attitudes and potentially undermining balanced information access, which aligns with the definition of an AI Incident. The study provides evidence of realized harm rather than potential harm, so the event is best classified as an AI Incident.

Así es cómo el algoritmo de 'X' de Elon Musk empuja a los usuarios hacia posiciones conservadoras

2026-02-18
Público.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithmic feed that curates social media content. The study demonstrates that the AI system's use has directly influenced users' political attitudes, pushing them towards more conservative views, which is a form of harm to communities and political discourse. The influence extends beyond direct users, affecting broader public agenda and media narratives, indicating indirect harm. Since the harm is realized and linked to the AI system's use, this fits the definition of an AI Incident rather than a hazard or complementary information.

A few weeks of X's algorithm can make you more right-wing - and it doesn't wear off quickly

2026-02-18
The Conversation
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as X's algorithm that governs content feeds. The study demonstrates that the AI system's use directly leads to shifts in political opinions and engagement patterns, which are forms of harm to communities by influencing political attitudes and potentially undermining democratic discourse. The harm is realized and measurable, not merely potential. The article also discusses the broader societal implications and calls for governance responses, but the core event is the documented impact of the AI system on users' political views, fitting the definition of an AI Incident.

El algoritmo de X conduce a los usuarios hacia posiciones políticas más conservadoras, según un estudio

2026-02-18
Telemundo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the algorithmic feed on X, which filters and orders content to influence user engagement. The study shows that the use of this AI system has directly led to harm in the form of political polarization and altered political attitudes, which are harms to communities. The harm is realized and documented through empirical research, not merely potential. Therefore, this meets the criteria for an AI Incident because the AI system's use has directly led to significant societal harm.

El 'Para ti' de X conduce a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the recommendation algorithm on X—that filters and orders content to influence user engagement. The study demonstrates that the algorithm's use has directly led to a shift in users' political opinions towards conservatism, which is a form of harm to communities by fostering polarization and potentially undermining democratic discourse. The harm is realized and documented through experimental evidence, not merely potential. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to significant societal harm.

El algoritmo de X impulsa opiniones más conservadoras, según estudio

2026-02-18
ABC Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithm that filters and orders social media content to personalize user feeds. The study demonstrates that the use of this AI system has directly led to a significant shift in users' political opinions towards conservatism and altered their online behavior, which is a form of harm to communities through political polarization and manipulation of information exposure. The harm is realized and documented, not merely potential. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

¿Sabías que hay un algoritmo de X que conduce a los usuarios hacia posiciones políticas más conservadoras?

2026-02-18
Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the feed algorithm on X, which filters and personalizes content. The study demonstrates that the algorithm's use directly led to increased political polarization and animosity, which is harm to communities. The harm is realized and measurable, not just potential. The AI system's role is pivotal in causing this harm by shaping user content exposure and influencing political attitudes. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Un estudio publicado en Nature revela que el algoritmo de X impulsa a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
La Opinion A Coruña - laopinioncoruna.es
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the recommendation algorithm of X, which shapes user content feeds. The study provides evidence that the AI system's use has directly caused shifts in political attitudes, which can be considered harm to communities by influencing public opinion and potentially affecting democratic processes. The harm is realized and documented, not merely potential. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information. The article does not focus on responses or mitigation but on the documented impact of the AI system's use.

Study: The political effects of X's feed algorithm

2026-02-19
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the feed algorithm) whose use directly led to changes in political attitudes and engagement among users, which can be considered harm to communities. The study provides experimental evidence that the AI system's outputs influence political opinions and behavior, which aligns with the definition of an AI Incident. Although no physical harm or legal violations are reported, the societal impact on political attitudes and potential polarization is a significant harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Un estudio publicado en Nature revela que el algoritmo de X impulsa a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
El Periódico Extremadura
Why's our monitor labelling this an incident or hazard?
The algorithm of X is an AI system that recommends content to users. The study shows that its use has directly led to shifts in political attitudes, which is a form of harm to communities by influencing political polarization and public opinion. The harm is realized and experimentally demonstrated, not merely potential. The event involves the use of an AI system and its direct causal role in influencing user behavior and attitudes, fitting the definition of an AI Incident.

Algoritmo de X (antes Twitter) promueve contenido conservador y reduce visibilidad de medios tradicionales, revela estudio

2026-02-18
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The study explicitly involves an AI system—the feed recommendation algorithm of X—that influences user content exposure and political attitudes. The algorithm's use has directly led to changes in users' political opinions and content consumption patterns, which constitutes harm to communities by potentially skewing political discourse and information access. Since the harm is realized and documented through the experiment, this qualifies as an AI Incident rather than a hazard or complementary information. The event is not merely about AI research or product updates but about actual effects caused by the AI system's use.

El algoritmo de X conduce a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
Metro
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the algorithmic feed of X—that filters and orders content, influencing users' political opinions. The study provides evidence that this AI system's use has directly led to harm by shaping political attitudes towards conservatism and increasing engagement with conservative content, which can contribute to societal polarization and misinformation. This constitutes harm to communities, fulfilling the criteria for an AI Incident. The harm is realized and documented, not merely potential, and the AI system's role is pivotal in causing this harm.

X Algorithm Skews Users Right, Effects Linger

2026-02-18
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (X's algorithm) that governs content ranking and exposure, directly influencing users' political opinions and attitudes, which is a form of harm to communities. The study provides empirical evidence of this effect, showing that the AI system's use has led to significant shifts in political views and engagement patterns, which are societal harms. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use.

El algoritmo de X empuja a los usuarios a posiciones políticas más conservadoras

2026-02-18
Agencia Sinc
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, namely the algorithmic feed that selects and prioritizes content for users based on complex data processing and user behavior. The study shows that the use of this AI system has directly led to a measurable shift in users' political opinions towards conservatism, which is a form of harm to communities by influencing political polarization and potentially affecting democratic discourse. The harm is realized and documented through the research findings, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant societal harm.

How X's Algorithm Influences Political Leanings | Technology

2026-02-19
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system influencing political opinions, which is a form of harm to communities. The study provides empirical evidence that the AI system's use has directly led to a measurable shift in political attitudes, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the study documents actual changes in user beliefs caused by the AI system. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

El algoritmo de X conduce a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
UDG TV
Why's our monitor labelling this an incident or hazard?
The algorithmic feed on X is an AI system that filters and personalizes content for users. The study demonstrates that this AI system's use has directly led to significant influence on users' political opinions and behavior, which constitutes harm to communities through political polarization and manipulation of information exposure. Since the harm (influence on political attitudes and potential societal polarization) is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework.

El algoritmo de X conduce a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
Diario de Los Andes
Why's our monitor labelling this an incident or hazard?
The algorithm in question is an AI system that filters, selects, and orders content to personalize user feeds. The study provides evidence that this AI system's use has directly led to significant shifts in users' political attitudes towards conservatism, which can be considered harm to communities by fostering polarization and potentially undermining democratic discourse. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework.

El algoritmo de X conduce a los usuarios hacia posiciones políticas más conservadoras

2026-02-18
Infogate.cl
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—the algorithmic feed of X—that filters and orders content to influence user engagement and political attitudes. The study demonstrates the algorithm's significant influence on users' political positioning, which is a societal impact of AI. However, the article does not describe any realized harm such as injury, rights violations, or disruption caused by the AI system, nor does it describe a plausible future harm scenario. Instead, it reports research findings that provide context and deepen understanding of AI's role in shaping political opinions and digital environments. This aligns with the definition of Complementary Information, which includes significant research findings with broad implications and governance relevance, without constituting an AI Incident or Hazard.

The political effects of X's feed algorithm

2026-02-18
Ben Werdmüller
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the feed algorithm) that influences political attitudes, which can be considered harm to communities through political manipulation and potential disruption of democratic processes. The study documents realized effects on users' political views, indicating actual harm rather than just potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant societal harm by shaping political opinions and potentially affecting elections.

El algoritmo de X conduce a los usuarios hacia posiciones políticas conservadoras, según estudio publicado por "Nature"

2026-02-18
contrapunto.com
Why's our monitor labelling this an incident or hazard?
The algorithm on X is an AI system that filters and personalizes content feeds. The study shows that its use has directly led to changes in users' political opinions and behaviors, which is a harm to communities through political polarization and manipulation of information exposure. This fits the definition of an AI Incident because the AI system's use has directly led to harm (polarization and influence on political attitudes). The article does not merely warn of potential harm but documents realized effects from the AI system's operation.

Un experimento revela que el algoritmo de X cambia tu forma de pensar a nivel político (y no es neutral)

2026-02-18
Muy Interesante
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: the algorithmic recommendation system of X. The study experimentally shows that the AI system's use has directly led to changes in users' political attitudes, which is a form of harm to communities by influencing democratic deliberation and political perception. The harm is realized and measurable, not merely potential. Although the harm is subtle and indirect, it fits within the definition of harm to communities caused by AI. Hence, this is an AI Incident rather than a hazard or complementary information.

Estudo conclui que algoritmo do X empurra utilizadores para posições mais conservadoras

2026-02-19
SAPO
Why's our monitor labelling this an incident or hazard?
The algorithm of X is an AI system that recommends content based on complex user data and interaction patterns. The study provides empirical evidence that the algorithm's use has directly influenced users' political opinions, pushing them towards more conservative positions and reducing exposure to traditional media. This influence constitutes harm to communities by affecting political attitudes and potentially contributing to polarization, which aligns with harm category (d) "Harm to property, communities, or the environment." Since the harm is realized and linked to the AI system's use, this event is classified as an AI Incident rather than a hazard or complementary information.

Estudo indica que algoritmo da rede social X leva a posições políticas mais conservadoras

2026-02-19
SAPO
Why's our monitor labelling this an incident or hazard?
The social media platform's algorithm is an AI system that filters and personalizes content feeds. The study provides evidence that its use has directly influenced users' political views towards conservatism, which constitutes harm to communities by shaping political attitudes and potentially contributing to polarization. This meets the criteria for an AI Incident because the AI system's use has directly led to significant societal harm. Although the harm is non-physical, it falls under harm to communities and influence on the political environment, which is recognized as a form of harm in the framework. Therefore, this event is classified as an AI Incident.

Así dirige el algoritmo de X a los usuarios hacia posiciones políticas más de derechas

2026-02-19
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: the algorithmic feed of X, which uses AI to filter, select, and order content for users. The study shows that the use of this AI system has directly led to harm in the form of manipulation of political opinions and polarization, which is a harm to communities and a violation of rights related to access to unbiased information. The harm is realized and documented through experimental evidence. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm to communities and rights.

Algoritmo da rede social X conduz a posições políticas mais conservadoras

2026-02-19
Publico
Why's our monitor labelling this an incident or hazard?
The social media platform's algorithm is an AI system that filters and orders content to personalize user feeds. The study provides evidence that the AI system's use has directly led to changes in users' political attitudes, favoring conservative views and reducing exposure to other perspectives, which is a form of harm to communities through manipulation of information and political polarization. Since the harm is realized and linked to the AI system's use, this qualifies as an AI Incident under the OECD framework.

Algoritmo da rede social X leva a posições políticas mais conservadoras, revela estudo

2026-02-19
RTP - Rádio Televisão Portuguesa
Why's our monitor labelling this an incident or hazard?
The study explicitly involves an AI system (the algorithmic feed) that influences user content and behavior. However, the article does not report any actual harm (such as injury, rights violations, or community harm) caused by the AI system, nor does it describe a credible risk of future harm from the AI system's use. Instead, it presents research findings on the AI system's influence on political attitudes, which is valuable contextual information. This fits the definition of Complementary Information, as it enhances understanding of AI's societal effects without reporting a specific AI Incident or AI Hazard.

Estudo da Nature conclui que algoritmo da X favorece posições conservadoras

2026-02-19
Jornal Expresso
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the algorithmic feed of the social media platform X) that filters and personalizes content, influencing users' political opinions. This algorithmic influence has directly led to harm in the form of polarization and bias in political attitudes, which can be considered harm to communities and a violation of rights related to access to balanced information. Since the harm is realized and documented, this qualifies as an AI Incident rather than a hazard or complementary information. The study's findings demonstrate the AI system's direct role in causing societal harm through biased content curation.

Estudo indica que algoritmo da rede social X leva a posições políticas mais conservadoras

2026-02-19
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
The social media platform's algorithm is an AI system that filters and personalizes content feeds. The study demonstrates that its use has directly led to users adopting more conservative political positions, indicating a significant influence on political attitudes and community polarization. This constitutes harm to communities, a recognized category of AI harm. The harm is realized and documented through the study's findings, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Inside the "gay tech mafia" that mixes social and professional lives, as investors, entrepreneurs, and executives detail gay influence in Silicon Valley

2026-02-19
Techmeme
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (algorithmic feed) whose use has directly led to harm by shifting political opinions and engagement in a way that affects societal discourse and community perceptions. The article highlights the causal link between the AI system's operation and the resulting political influence, which constitutes harm to communities. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Research Shows That X Amplifies Conservative Political Views

2026-02-19
Social Media Today | A business community for the web's best thinkers on Social Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—X's algorithmic feed—that shapes political attitudes by promoting certain content and demoting other content. The study shows that this AI system's use has directly led to harm by amplifying conservative political views, influencing political opinions, and potentially contributing to societal polarization and misinformation. This constitutes harm to communities and a violation of rights to unbiased information, fitting the definition of an AI Incident. The article does not merely discuss potential harm or responses but documents realized effects of the AI system's use.
Thumbnail Image

Social sciences: X's algorithm may influence political attitudes (Nature)

2026-02-20
Nature Middle East
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'For You' algorithm) that curates social media content and thereby influences users' political opinions. The research shows that this influence is significant and persistent, indicating a direct causal link between the AI system's use and changes in political attitudes. Such influence on political views constitutes harm to communities by potentially increasing polarization and shaping political behavior, fitting the definition of an AI Incident. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.
Thumbnail Image

Social network X's algorithm leads to more conservative political positions - Study

2026-02-19
Executive Digest
Why's our monitor labelling this an incident or hazard?
The social media platform's algorithmic feed is an AI system that filters and personalizes content. The study demonstrates that its use has directly led to users adopting more conservative political views and interacting more with conservative content, which is a form of harm to communities by influencing political attitudes and potentially increasing polarization. The harm is realized and documented by the study, not merely potential. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to significant societal harm.
Thumbnail Image

Social network X favors conservative content and influences users - Tek Notícias

2026-02-19
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the social media platform's recommendation algorithm) influencing user perceptions and content exposure. The influence is documented and ongoing, but no direct or realized harm (such as injury, rights violations, or significant community harm) is reported. The study highlights the plausible risk of polarization and misinformation due to algorithmic influence, but these are potential or indirect harms rather than confirmed incidents. Therefore, the event fits best as Complementary Information, providing important context and understanding about AI's societal influence without reporting a specific AI Incident or imminent AI Hazard.
Thumbnail Image

X's algorithm may favor conservative political content, study finds

2026-02-19
VEJA
Why's our monitor labelling this an incident or hazard?
The algorithmic feed is an AI system that personalizes content for users. The study demonstrates that its use has directly influenced users' political attitudes and engagement patterns, favoring conservative content and reducing visibility of traditional news sources. This influence on political attitudes and information exposure constitutes harm to communities and possibly a violation of rights related to access to balanced information. Since the harm is realized and linked to the AI system's use, this qualifies as an AI Incident.
Thumbnail Image

X's algorithm pushes users to the right

2026-02-20
Portal EcoDebate
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the feed algorithm of X, which uses AI techniques to reorder and recommend content. The study demonstrates that the algorithm's use has directly led to political radicalization and polarization, which are harms to communities and democratic processes. The harm is realized and empirically measured, not merely potential. The article also discusses regulatory challenges and societal impacts, but the core event is the AI system's use causing significant social harm. Hence, the classification as an AI Incident is appropriate.
Thumbnail Image

How did X's 'For You' feed shift politics?

2026-02-19
AllToc
Why's our monitor labelling this an incident or hazard?
The algorithmic feed is an AI system that selects and ranks content based on inferred user preferences and engagement, influencing the information users receive. The study shows that this AI-driven content curation has directly led to shifts in political opinions and priorities among users, which constitutes a significant harm to communities by impacting democratic processes and public discourse. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use.
Thumbnail Image

X's "For You" Algorithm May Be Able to Shift Political Views Permanently, New Study Finds

2026-02-19
The Debrief
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the algorithmic feed of a social media platform—that curates content and influences user behavior and political attitudes. The study shows that the AI system's use has directly led to measurable shifts in political views and engagement, which can be considered harm to communities due to the impact on democratic processes and political polarization. The harm is realized and empirically demonstrated, not merely potential. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant societal harm.
Thumbnail Image

Study concludes that X's algorithm pushes users toward more conservative positions

2026-02-19
Forbes Portugal
Why's our monitor labelling this an incident or hazard?
The algorithmic feed is an AI system that selects content based on user preferences and interaction patterns. The study demonstrates that its use has directly influenced users' political attitudes, pushing them towards more conservative views and reducing exposure to traditional media. This manipulation of information and alteration of political opinions constitutes harm to communities, fulfilling the criteria for an AI Incident. The harm is realized and documented, not merely potential, and the AI system's role is pivotal in causing this harm.
Thumbnail Image

Why did X's algorithm shift users right?

2026-02-20
AllToc
Why's our monitor labelling this an incident or hazard?
The algorithmic feed is an AI system that curates content based on user data and behavior. The study shows that its use caused a direct shift in political views, which constitutes harm to communities as it affects societal discourse and political polarization. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework.
Thumbnail Image

A study explains how X's algorithm prioritizes more conservative political positions

2026-02-20
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the recommendation algorithm of X) whose use has indirectly led to ideological polarization and to harm to communities by reinforcing political biases and limiting exposure to diverse viewpoints. Although no physical harm or direct legal violations are reported, the amplification of polarized political content and the resulting ideological isolation constitute harm to communities and public discourse, fitting the definition of an AI Incident. The study provides evidence of realized effects, not just potential risks, so this is not merely a hazard or complementary information.
Thumbnail Image

X's "For You" algorithm pushes the platform's users toward more conservative positions

2026-02-20
Diario Uno
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system that curates content for users, influencing their information exposure and interactions. The study shows that the algorithm's use caused a shift toward more conservative political views, which can be interpreted as a form of societal harm through polarization and manipulation of information exposure. This fits the definition of an AI Incident because the AI system's use has indirectly led to harm to communities by affecting political attitudes and potentially undermining democratic quality. The event is not general AI-related news or a future risk but documents realized effects of the AI system's operation.
Thumbnail Image

X's 'For You' Feed Could Be Making Users More Conservative, Study ...

2026-02-21
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The algorithmic feed is an AI system that curates content based on user data and engagement. The study shows that its use has directly led to a distortion of public opinion by promoting certain political content and suppressing others, which constitutes harm to communities. This is a realized harm, not just a potential one, as evidenced by the measurable change in users' views. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm to communities through biased information dissemination.
Thumbnail Image

X's algorithm silences women: the impact on politics

2026-02-21
Artículo 14
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (X's algorithm) that optimizes content visibility based on interaction patterns driven predominantly by male users. This leads to systemic bias and marginalization of women in political discourse, causing social harm and potentially violating rights to equal participation. The harm is realized and ongoing, as evidenced by the described effects on women's participation and the reinforcement of conservative, male-dominated narratives. Thus, the event meets the criteria for an AI Incident due to indirect harm caused by the AI system's use.
Thumbnail Image

X's algorithm pushes toward conservative positions - EcoAvant.com

2026-02-20
EcoAvant.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the algorithmic content recommendation system of X, which selects posts for users. The study shows that the use of this AI system has directly influenced users' political opinions, pushing them towards more conservative views, which is a form of harm to communities through increased polarization and potential social disruption. The harm is realized and documented through empirical research, not merely potential. The AI system's role is pivotal in causing this effect, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or responses but reports on actual observed impacts, excluding classification as a hazard or complementary information.