AI Bots Deceive Social Media Users in Political Discourse, Study Finds


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers at the University of Notre Dame found that social media users struggle to distinguish AI bots from humans during political discussions, with participants misidentifying bots 58% of the time. This inability enables AI bots to spread misinformation, undermining public discourse and harming communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating content and operating bot networks that directly lead to harm by flooding social media with misleading, spammy, and manipulative content. This harms communities by degrading online conversations and potentially spreading disinformation. The AI involvement is explicit and central to the harm described. The harm is realized and ongoing, not merely potential. Hence, the classification as an AI Incident is appropriate.[AI generated]
AI principles
Transparency & explainability, Democracy & human autonomy, Accountability, Safety, Respect of human rights, Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest, Human or fundamental rights, Reputational, Psychological

Severity
AI incident

Business function:
Other

AI system task:
Content generation, Interaction support/chatbots


Articles about this incident or hazard


Twitter is becoming a 'ghost town' of bots as AI content floods the internet

2024-02-27
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating content and operating bot networks that directly lead to harm by flooding social media with misleading, spammy, and manipulative content. This harms communities by degrading online conversations and potentially spreading disinformation. The AI involvement is explicit and central to the harm described. The harm is realized and ongoing, not merely potential. Hence, the classification as an AI Incident is appropriate.

Social Media Users Find It Hard to Identify AI Bots in Political Discussions, New Study Shows

2024-02-28
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLM-based bots) used in social media political discussions. The study shows these AI bots successfully spread misinformation, which is a harm to communities and the information ecosystem. The harm is realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through misinformation dissemination and user deception.

AI among us: Social media users struggle to identify AI bots during political discourse

2024-02-27
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLM-based bots) used in social media political discourse, where their outputs are designed to spread misinformation and deceive users about their nature. The study shows that humans often fail to distinguish AI bots from humans, increasing the risk and actual occurrence of misinformation spread, which harms communities by undermining truthful information and political discourse. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation dissemination. The event is not merely a potential risk (hazard) or a complementary update but a documented instance of AI-generated misinformation impacting social discourse.

Unlocking the secrets of social bots: Research sheds light on AI's role in spreading disinformation

2024-02-29
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (social bots using machine learning and deep learning) that are actively spreading disinformation, which is a form of harm to communities. The article describes the direct role of AI in causing this harm by influencing public opinion and manipulating markets. Since the harm is occurring and the AI system's involvement is explicit, this qualifies as an AI Incident under the framework. The article does not merely discuss potential risks or responses but documents an ongoing issue of AI-driven disinformation spread.

Spot the bot: social media users struggle to tell humans from bots

2024-02-29
Institution of Engineering and Technology
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI bots powered by large language models engaging in political discussions and successfully deceiving human users, which leads to the spread of misinformation. This misinformation can harm communities by distorting public opinion and potentially influencing elections. The AI system's use is directly linked to this harm, fulfilling the criteria for an AI Incident involving harm to communities through misinformation dissemination.

Research Unveils AI's Role in Disseminating Disinformation: Unlocking the Secrets of Social Bots

2024-03-01
India Education Diary
Why's our monitor labelling this an incident or hazard?
The article focuses on a research study that investigates the role of AI-driven social bots in spreading disinformation, which is a recognized harm to communities and public discourse. However, the article itself does not report a specific AI Incident (i.e., a concrete event where harm has occurred) nor does it describe a new AI Hazard (a plausible future harm event). Instead, it provides complementary information by enhancing understanding of the AI ecosystem and the challenges posed by social bots, emphasizing the need for vigilance and improved detection methods. Therefore, it fits the definition of Complementary Information rather than an Incident or Hazard.

People are using error-prone AI chatbots to help them migrate to Australia

2024-02-28
Crikey
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (custom GPT chatbots) being used to provide migration advice. These AI systems have given incorrect or incomplete information, which can cause real harm to users, including financial loss and legal consequences such as visa rejection or exclusion. Given the examples provided, such as incorrect advice about visa eligibility, the harm is realized or highly likely. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to individuals relying on its migration advice.

Social media users struggle to spot political AI bots

2024-02-27
Futurity
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLM-based AI bots) used in social media political discourse. The AI bots' use has directly led to harm by enabling the spread of misinformation and deceiving users about the nature of the accounts, which harms communities and societal trust. The study's findings confirm that the AI system's use is a contributing factor to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is occurring and demonstrated.