
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Studies led by Yale researchers show that large language models such as GPT-4o, used in AI chatbots, unintentionally introduce political biases into historical summaries. These biases can subtly shift users' social and political opinions, altering public perception and potentially affecting democratic discourse in the United States.[AI generated]
Why is our monitor labelling this an incident or hazard?
An AI system (GPT-4o) is explicitly involved, generating summaries that influence political opinions. This influence constitutes indirect harm to communities by shaping societal perceptions and potentially biasing information, which aligns with harm category (d). Because the harm is already occurring (opinion shifts were measured) rather than merely plausible, even though it is subtle and indirect, this qualifies as an AI Incident rather than an AI Hazard. The article does not describe a response or governance action, so it is not Complementary Information. The event is not unrelated, as it directly involves AI-generated content causing measurable societal impact.[AI generated]
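The rationale above walks through a triage sequence: check AI involvement, then governance response, then whether harm has materialised or is only plausible. A minimal sketch of that decision order is below; the label names follow the text, but the function, its parameters, and the rule ordering are illustrative assumptions, not the monitor's actual implementation.

```python
# Illustrative sketch of the triage logic described above.
# The function name, parameters, and rule ordering are assumptions
# for demonstration; this is not the monitor's real classifier.

def classify(ai_involved: bool, harm_occurring: bool,
             harm_potential: bool, governance_response: bool) -> str:
    """Assign one of the four labels discussed in the rationale."""
    if not ai_involved:
        return "Unrelated"
    if governance_response:
        # Articles describing a response or governance action
        return "Complementary Information"
    if harm_occurring:
        # Harm has materialised, even if subtle and indirect
        return "AI Incident"
    if harm_potential:
        # Plausible but not yet realised harm
        return "AI Hazard"
    return "Unrelated"

# The article above: AI involved, opinion shifts measured, no governance action.
print(classify(True, True, True, False))  # -> AI Incident
```

Applying the same sketch with no measured harm but a plausible risk would yield "AI Hazard", mirroring the incident-versus-hazard distinction drawn in the rationale.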