AI-Powered Social Media Bots Threaten Privacy and Spread Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-driven bot accounts on social media are increasingly collecting and analyzing users' personal data, mimicking identities, and spreading misinformation. Experts warn that these bots pose significant risks to individual privacy and societal trust, and can be used to manipulate public opinion, urging users to remain vigilant and report suspicious activity. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly described as AI-supported bot accounts on social media that analyze personal data and impersonate users. The harms described include privacy violations, misinformation dissemination, and manipulation of societal and political processes, which are direct harms to communities and individuals. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to significant harms including violations of rights and harm to communities through misinformation and manipulation. [AI generated]
AI principles
Privacy & data governance; Transparency & explainability; Democracy & human autonomy; Respect of human rights; Accountability; Robustness & digital security; Fairness; Human wellbeing; Safety

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers; General public

Harm types
Human or fundamental rights; Public interest; Psychological; Reputational

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots; Organisation/recommenders; Goal-driven organisation


Articles about this incident or hazard

Bot accounts pose a danger! Social media users' data is being stored one by one

2025-07-13
Yeni Akit Gazetesi
How are bot accounts steering society?

2025-07-13
Memurlar.Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (bot accounts using AI for personality analysis and mimicry) whose use has directly led to harms including misinformation dissemination, manipulation of societal perceptions, and threats to privacy and security. These harms affect communities and violate rights related to information integrity and privacy. Since the harms are occurring and the AI systems are central to these harms, this qualifies as an AI Incident rather than a hazard or complementary information.
Bot accounts that track users on social media are storing personal data

2025-07-13
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly refers to AI-driven bot accounts on social media that collect and analyze personal data and are used for manipulation and misinformation campaigns. While it does not describe a specific incident of harm occurring, it clearly outlines the plausible risks and threats these AI systems pose to individuals and societies. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to harms such as violations of privacy, misinformation, and social disruption. There is no description of a specific incident causing realized harm, so it is not an AI Incident. The article is more than general AI news or product updates, so it is not Unrelated or merely Complementary Information.
The danger of bot accounts on social media

2025-07-13
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems in the form of social media bots that use AI to imitate users and manipulate social media dynamics. While no direct harm is reported, the expert warns about the plausible risks these AI systems pose to privacy, societal trust, and information integrity. The discussion centers on potential harms that could arise from the development and use of these AI bots, fitting the definition of an AI Hazard. There is no report of an actual incident or realized harm, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the credible threat posed by AI bots.
Beware of bot accounts on social media: they store personal data, and such accounts should be treated with suspicion

2025-07-13
Yeni Asya
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-driven bot accounts that perform sophisticated analysis and mimicry of users, indicating the involvement of AI systems. Although it does not report a specific incident of harm, it warns about the credible and significant risks these bots pose, including privacy breaches, misinformation, and manipulation of social and political dynamics. These risks align with potential harms to individuals and communities as defined in the framework. Since the harms are plausible and the AI systems' role is central, but no realized harm is described, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.
Bot accounts are collecting your personal data

2025-07-16
Akşam
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (bot accounts using AI to analyze data and mimic users) whose use has directly led to harms including privacy violations, misinformation, and manipulation of public opinion. The article provides concrete examples of these harms occurring in real-world contexts such as elections and conflicts. Therefore, this qualifies as an AI Incident due to realized harm caused by AI system use.