AI-Generated Survey Responses Undermine Public Opinion Polls in the US


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Dartmouth College study shows that AI-powered language models can generate fake survey responses that evade detection, manipulate public opinion polls, and alter the outcomes of major US election surveys. By corrupting critical data at scale, this manipulation threatens the integrity of democratic processes and scientific research.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (a language model-based synthetic respondent) used to manipulate public opinion surveys by generating undetectable false responses. This use of AI directly leads to harm by corrupting data critical for democratic elections and scientific research, impacting communities and democratic accountability. The harm is realized, not just potential, as the study shows how AI-generated responses can drastically alter survey outcomes. Hence, it meets the criteria for an AI Incident due to direct harm caused by AI use.[AI generated]
AI principles
Democracy & human autonomy; Transparency & explainability; Fairness; Accountability; Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
Government; General public

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Artificial intelligence can manipulate public opinion polls without being detected, study warns

2025-11-17
Diario El Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (a language model-based synthetic respondent) used to manipulate public opinion surveys by generating undetectable false responses. This use of AI directly leads to harm by corrupting data critical for democratic elections and scientific research, impacting communities and democratic accountability. The harm is realized, not just potential, as the study shows how AI-generated responses can drastically alter survey outcomes. Hence, it meets the criteria for an AI Incident due to direct harm caused by AI use.

AI can pass itself off as people in surveys, according to a study

2025-11-18
Euronews Español
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used to generate synthetic survey responses that deceive detection systems and skew survey results. This manipulation constitutes harm to communities by threatening democratic processes and scientific research integrity, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the study demonstrates successful deception and quantifies the impact on election polling. Therefore, this event qualifies as an AI Incident.

Artificial intelligence can corrupt public opinion polls at scale

2025-11-17
El Periódico
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a synthetic autonomous respondent based on a large language model) to generate false survey answers that can change election-related survey results. This use of AI directly leads to harm by corrupting the integrity of public opinion surveys, which are essential for democratic governance and societal trust. The harm is realized or highly plausible given the demonstrated ability of AI to produce undetectable false responses at scale, potentially exploited by adversaries. Therefore, this qualifies as an AI Incident due to the direct link between AI use and harm to communities and democratic processes.

Artificial intelligence can corrupt opinion polls...

2025-11-17
Notimérica
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a synthetic autonomous survey respondent powered by large language models) that has been demonstrated to successfully manipulate opinion polls by generating fake responses that pass quality controls. This manipulation directly harms communities by corrupting democratic processes and scientific research, fulfilling the criteria for harm to communities and violation of trust in information. The harm is realized, not just potential, as the study shows how few AI-generated responses can change poll outcomes and that a significant portion of survey respondents already use AI to answer questions. Therefore, this qualifies as an AI Incident.

Artificial intelligence can corrupt public opinion polls at scale

2025-11-17
Andalucía Información
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a large language model-based synthetic respondent) that has been demonstrated to produce fake survey responses indistinguishable from real human answers. This use has directly led to harm by corrupting the integrity of public opinion polls and scientific research data, which are critical for democratic processes and policy-making. The harm is realized, not just potential, as the study shows that even a small number of AI-generated responses can significantly alter poll outcomes. The article also notes that a significant portion of survey respondents already use AI to answer questions, indicating ongoing harm. Thus, the AI system's use has directly and indirectly caused harm to communities and the knowledge ecosystem, fitting the definition of an AI Incident.

Artificial intelligence has the capacity to alter public opinion polls at scale

2025-11-17
NoticiasDe.es
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (a synthetic autonomous respondent powered by large language models) used to generate fake survey responses that manipulate poll results. This use of AI has directly led to harm by corrupting public opinion data, which is critical for democratic decision-making and scientific research. The harm includes manipulation of election-related surveys and contamination of scientific data, which qualifies as harm to communities and a violation of trust in democratic processes. The AI system's role is pivotal in enabling this manipulation at scale and with high effectiveness, making this an AI Incident rather than a hazard or complementary information.

Artificial intelligence can corrupt public opinion polls at scale

2025-11-18
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (large language models) used to generate synthetic survey responses that have already been shown to pass quality controls and alter survey results. This manipulation constitutes a violation of the integrity of democratic processes and scientific research, which falls under harm to communities and potentially breaches trust and rights related to accurate information. The harm is realized, not just potential, as the study demonstrates actual impact on survey outcomes and the presence of AI-generated responses in real surveys. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing significant harm through manipulation and misinformation.

How AI manages to manipulate polls

2025-11-18
LaSexta
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (a synthetic autonomous respondent based on large language models) used to manipulate opinion surveys by generating fake but highly realistic responses. This use of AI directly leads to harm by corrupting data that informs elections, public policy, and scientific research, which qualifies as harm to communities and a violation of trust in democratic processes. Therefore, this event meets the criteria for an AI Incident because the AI's use has directly led to significant harm.

AI can now manipulate polls without being detected, according to a study

2025-11-18
Agencia Sinc
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system developed to autonomously generate survey responses that mimic human behavior and evade detection. The use of this AI system has directly led to harm by corrupting public opinion surveys, which are critical for democratic decision-making and scientific research, thus constituting harm to communities and a breach of trust. The harm is realized, not just potential, as the study shows actual manipulation of survey results. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

AI bots can taint electoral and scientific surveys at scale

2025-11-18
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the synthetic respondent bot) that is used to manipulate survey results, which directly harms democratic processes and scientific research integrity. The harm is realized and ongoing, as indicated by the study's findings and the reported use of AI-generated responses in actual surveys. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to communities and the knowledge ecosystem. The event is not merely a potential risk or a governance response but a documented case of AI misuse causing harm.

AI can impersonate humans in public opinion polls, study finds

2025-11-18
Euronews English
Why's our monitor labelling this an incident or hazard?
The study explicitly shows that AI-generated synthetic respondents can manipulate online surveys to the extent of flipping election predictions and poisoning scientific research data. The AI system's use here directly leads to harm by distorting public opinion measurement and potentially influencing democratic elections and research validity. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to communities and societal functions. Therefore, the event is classified as an AI Incident.

This AI mimics humans so perfectly it can corrupt public opinion polls and surveys

2025-11-18
TechSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system designed to impersonate humans in online surveys and polls, successfully evading detection and potentially flipping poll outcomes. This manipulation can distort public opinion measurement and scientific research, which are essential for informed decision-making and democratic governance. The harm is realized and significant, as it undermines trust in democratic processes and scientific data. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Fake survey answers from AI could quietly sway election predictions

2025-11-17
Phys.org
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system designed to mimic real human survey responses and evade detection, which has been demonstrated to manipulate election polling outcomes and distort research data. The harm is realized and direct, as the AI-generated fake responses have already influenced major national polls and threaten the integrity of scientific research and democratic processes. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (undermining democratic processes) and violations of rights (democratic accountability and research integrity).

How AI can rig polls

2025-11-17
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system developed and used to generate fake survey responses that can flip election poll results and poison scientific research data. The harm is realized and direct, as the AI system's outputs have already influenced or could influence public opinion and research outcomes, violating trust and potentially democratic accountability. The AI system's involvement is clear, and the harm includes manipulation of public opinion and scientific knowledge, which are significant harms to communities and societal functions. Hence, this is an AI Incident.

The $0.05 AI Scam That Could Threaten Public Opinion Research

2025-11-18
Study Finds
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems are used to produce synthetic survey responses that pass quality controls and can bias poll results, directly undermining the reliability of public opinion research. This manipulation can distort democratic accountability and policy decisions, constituting harm to communities and violation of rights related to truthful information. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

AI bots 'can pass as humans in online political surveys'

2025-11-17
thetimes.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (bots) used to impersonate humans in political surveys, which is a misuse of AI technology. Although the article does not report actual harm occurring yet, the plausible risk of harm to communities and democratic processes through misinformation or manipulation of political data is credible. Therefore, this constitutes an AI Hazard rather than an AI Incident, as harm could plausibly result from this AI use but has not been explicitly stated as having occurred.

A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On

2025-11-18
404 Media
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system designed to mimic human survey respondents with near-perfect evasion of detection, leading to the contamination of survey data. This contamination undermines the validity of scientific research and public opinion polling, which is a clear harm to communities and the knowledge ecosystem. The harm is realized, not just potential, as the paper shows how few AI-generated responses could flip major national polls. The AI system's development and use are central to this harm, fulfilling the criteria for an AI Incident. The event is not merely a warning or potential risk (AI Hazard), nor is it a response or update (Complementary Information), nor unrelated to AI harms.

AI can corrupt opinion polls, sway election results

2025-11-18
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (an autonomous synthetic respondent) that has been used to manipulate opinion polls, which are critical to democratic processes. The AI's ability to produce realistic, tailored responses that can flip poll outcomes demonstrates direct involvement in causing harm to communities by corrupting election-related information and potentially influencing election results. This harm aligns with violations of rights and harm to communities as defined in the framework. Hence, the event qualifies as an AI Incident rather than a hazard or complementary information.

Dartmouth study exposes how 5-cent AI bots can flip election polls undetected

2025-11-20
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating synthetic survey responses that have been shown to pass fraud detection and can flip election poll results. This constitutes direct use of AI leading to harm in the form of misinformation and manipulation of democratic processes and scientific research, which are harms to communities and violations of the integrity of democratic rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.