AI-Driven Foreign Influence Campaigns Manipulate Social Media Ahead of 2024 US Election


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Foreign actors, including Russia, China, Iran, and Israel, have used generative AI and social bots to conduct coordinated influence campaigns on social media. These AI-powered operations spread disinformation, manipulate public opinion, and flood platforms with fake content, causing harm to communities and distorting public discourse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI systems, including generative AI and AI-generated social bots, to conduct coordinated inauthentic behavior on social media platforms. These AI systems are used to spread disinformation and scams and to manipulate public opinion, causing clear harm to communities and potentially to human rights. The involvement of AI in the use phase (operation of bots and content generation) has directly led to these harms. This therefore qualifies as an AI Incident under the framework, as the AI systems' use has directly caused significant harm to communities through manipulation and misinformation.[AI generated]
AI principles
Transparency & explainability
Accountability
Robustness & digital security
Respect of human rights
Democracy & human autonomy
Human wellbeing
Safety

Industries
Media, social platforms, and marketing
Digital security
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest
Psychological
Human or fundamental rights
Reputational

Severity
AI incident

AI system task
Content generation
Interaction support/chatbots

Articles about this incident or hazard


How foreign operations are manipulating social media to influence your views

2024-10-08
The Conversation

How GenAI makes foreign influence campaigns on social media even worse

2024-10-09
Fast Company
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating fake profile pictures and managing fake accounts) used in foreign influence campaigns that have directly led to harms such as scams, spam dissemination, and manipulation of social media content. These harms affect communities and the information environment, fitting the definition of an AI Incident. The article provides concrete examples of realized harm rather than just potential risk, so it is not merely a hazard or complementary information.

How foreign operations are manipulating social media to influence people's views

2024-10-08
Phys.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, ChatGPT, AI-generated faces) being used to create and manage fake social media accounts that spread scams, spam, and disinformation. This coordinated inauthentic behavior manipulates social media feeds and degrades the quality of information, harming communities by distorting public discourse and potentially influencing opinions. The harm is realized and ongoing, not merely potential. This therefore qualifies as an AI Incident because the AI systems' use has directly led to harm to communities through manipulation and the dissemination of misinformation.

Foreign operations manipulate social media to influence your views - UPI.com

2024-10-08
UPI
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI and social bots) used in foreign influence campaigns that have actively spread false narratives and manipulated public opinion, causing harm to communities. This is a direct harm caused by the use of AI systems in the development and execution of these campaigns. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.