AI Chatbot Biases Influence Public Political Opinions


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Studies led by Yale researchers show that large language models like GPT-4o, used in AI chatbots, unintentionally introduce political biases into historical summaries. These biases subtly influence users' social and political opinions, shifting public perception and potentially affecting democratic discourse in the United States.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (GPT-4o) is explicitly involved, generating summaries that influence political opinions. This influence constitutes indirect harm to communities by shaping societal perceptions and potentially biasing information, which aligns with harm category (d). Since the harm is occurring (opinion shifts measured) but is subtle and indirect, this qualifies as an AI Incident rather than a hazard. The article does not describe a response or governance action, so it is not Complementary Information. The event is not unrelated as it directly involves AI-generated content causing measurable societal impact.[AI generated]
AI principles
Fairness; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


AI Summaries Shift Political Views Toward Progressivism

2026-03-05
Chosun.com

AI biases can influence people's perception of history

2026-03-03
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) whose outputs influence human perceptions and opinions. Although the study does not report actual harm occurring, it identifies a credible risk that AI biases could shape public understanding and political attitudes, which could plausibly lead to harm to communities or violations of rights in the future. This therefore qualifies as an AI Hazard: it describes a plausible future harm stemming from AI system use, rather than a realized incident or mere complementary information.

AI Biases Shape Public View of History

2026-03-03
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) and their latent biases affecting public opinion, which is a recognized concern. However, the article presents research findings rather than an actual incident of harm or a plausible immediate hazard. There is no indication that the AI system's use has directly or indirectly caused harm as defined by the framework. The content serves to inform and contextualize AI's societal effects, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Chatbots' Subtle Bias: Steering Opinions Unintentionally

2026-03-03
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (large language models powering chatbots) whose use has directly led to a subtle but measurable influence on people's social and political opinions. This influence arises from latent biases in the AI's training data, which shape the framing of historical narratives. Because it affects societal discourse and individual viewpoints without transparency or user awareness, this influence can be considered harm to communities. It therefore qualifies as an AI Incident: the AI system's use has directly led to a harm (opinion manipulation) as defined in the framework.

AI's hidden bias: Chatbots can influence opinions without trying

2026-03-03
YaleNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (large language models powering chatbots) whose use, providing factual summaries, has directly led to a measurable influence on users' political opinions. This influence is a form of harm to communities, as it affects social and political attitudes and potentially the formation of democratic discourse and public opinion. The bias is latent and unintentional but still produces real-world effects. This therefore qualifies as an AI Incident: the AI system's use has directly led to harm (opinion influence) as defined in the framework.

How AI Biases Shape Our Understanding of History

2026-03-03
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (GPT-4o, a large language model) generating historical narratives with latent political biases. The research demonstrates that these biases influence public opinion, which constitutes harm to communities by skewing collective memory and potentially entrenching polarization. This meets the definition of an AI Incident because the AI system's use has directly led to harm: harm to communities through biased information and influence on public opinion. The harm is realized, not merely potential, as the study shows measurable shifts in participant opinions caused by AI-generated content. The event is thus best classified as an AI Incident.