Meta Deletes 10 Million AI-Generated Facebook Accounts in Crackdown on Spam


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta deleted around 10 million Facebook accounts in early 2025, targeting AI-generated and impersonator accounts that spread spam and degrade the user experience. The company used AI-based detection to identify and remove these accounts, aiming to improve authenticity and reward original content creators.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI technology both in the generation of spammy content and in Meta's detection and suppression of duplicate or AI-assisted low-quality content. The deletion of accounts is a direct response to AI-generated harmful content flooding the platform, which harms the community by spreading misinformation and degrading user experience. Since the AI system's use has directly led to harm to communities through spam and impersonation, and Meta's AI-based countermeasures are part of the response, this qualifies as an AI Incident.[AI generated]
AI principles
Transparency & explainability; Accountability; Fairness; Privacy & data governance; Robustness & digital security; Safety; Respect of human rights; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; IT infrastructure and hosting

Harm types
Economic/Property; Reputational; Psychological; Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control; ICT management and information security

AI system task
Event/anomaly detection


Articles about this incident or hazard


Meta confirms it has deleted 10 million Facebook accounts, here's why

2025-07-15
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology both in the generation of spammy content and in Meta's detection and suppression of duplicate or AI-assisted low-quality content. The deletion of accounts is a direct response to AI-generated harmful content flooding the platform, which harms the community by spreading misinformation and degrading user experience. Since the AI system's use has directly led to harm to communities through spam and impersonation, and Meta's AI-based countermeasures are part of the response, this qualifies as an AI Incident.

Meta Removes 10 Million Fake Profiles as It Cracks Down on Spam

2025-07-15
Markets Insider
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to detect and reduce spam and fake profiles, which is a use of AI in content moderation. However, the article focuses on the company's proactive measures to prevent harm rather than describing an actual incident of harm caused by AI. There is no direct or indirect harm reported, only efforts to mitigate potential issues. Therefore, this is best classified as Complementary Information, providing context on societal and technical responses to AI-related challenges in social media content management.

Meta Cracks Down On AI-Generated Facebook Spam

2025-07-15
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems both as the source of harm (AI-generated spam content flooding the platform and harming content creators' ability to have their original work recognized and monetized) and as tools for moderation (AI-powered content moderation). The harm is realized and ongoing, as content creators are negatively impacted by AI-generated spam drowning out original content, which constitutes harm to communities and violation of intellectual property rights. Therefore, this qualifies as an AI Incident. The article also discusses Meta's mitigation efforts, but the primary focus is on the harm caused by AI-generated spam and its impact on creators, not just the response, so it is not merely Complementary Information.

Meta cracks down on fake, repetitive content to protect creators

2025-07-15
Daily Times
Why's our monitor labelling this an incident or hazard?
The article discusses Meta's policy updates and enforcement actions to combat unoriginal and AI-generated content, which is a governance response to existing challenges on the platform. While AI-generated content is mentioned, the article does not describe any specific AI Incident (harm caused) or AI Hazard (plausible future harm). Instead, it focuses on measures to protect creators and improve content attribution, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI-related issues.

Meta removes 10 million Facebook profiles in effort to combat spam

2025-07-16
Ammon News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for content creation and detection of duplicate or spammy content. The removal of profiles engaged in AI-generated spam directly addresses harm to communities by reducing misinformation and spam, which aligns with harm category (d) - harm to communities. Since the AI system's use and misuse have led to realized harm through the spread of spam and inauthentic content, this qualifies as an AI Incident. The article describes concrete actions taken to mitigate ongoing harm rather than just potential future harm or general AI-related news, so it is not a hazard or complementary information.

Facebook deleted 10 million fake profiles

2025-07-16
Express Urdu
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to detect and remove fake profiles and copied content, directly addressing harm to online communities from misinformation, impersonation, and spam. Because that harm was already realized when the accounts were removed, and the AI system played a pivotal role in mitigating it, this is classified as an AI Incident rather than a hazard or complementary information.

Facebook acts with the help of AI tools: 10 million fake profiles deleted

2025-07-16
Daily Pakistan
Why's our monitor labelling this an incident or hazard?
The event describes Facebook's use of AI tools to identify and remove fake profiles and duplicated content. The harm here — misinformation and impersonation by active fake profiles — was realized, and the AI system was central to mitigating it, so this qualifies as an AI Incident rather than a hazard or complementary information.

Bad news for those who steal and share others' content on Facebook: Meta announces enforcement of a strict policy

2025-07-17
Nai Baat
Why's our monitor labelling this an incident or hazard?
The article describes Meta's use of an AI system capable of recognizing copied videos to enforce content sharing policies. However, the event does not describe any realized harm or incident caused by the AI system; rather, it is about a new policy and AI system deployment to prevent potential harm (copyright infringement and spam). Therefore, it is complementary information about governance and mitigation measures related to AI use, not an incident or hazard.

Meta removed 10 million Facebook profiles linked to spam

2025-07-15
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is used to generate mass spam content on Facebook, which harms the community by flooding the platform with low-quality, repetitive content. Meta's removal of 10 million profiles linked to such AI-generated spam indicates that harm has already occurred. The harm is to the community and to the rights of genuine content creators, as spam and impersonation diminish their visibility and opportunities. The AI system's use in producing spam content is a direct contributing factor to this harm. Hence, this event meets the criteria for an AI Incident.

Facebook will also block inauthentic content created by AI

2025-07-15
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the sense that AI-generated content is part of the low-quality content targeted by Meta's new rules. However, the article focuses on Meta's policy changes and enforcement measures to combat such content, which is a governance and societal response rather than a new AI Incident or AI Hazard. There is no report of realized harm or a specific event where AI caused harm. Therefore, this is best classified as Complementary Information, providing context on societal and platform governance responses to AI-related content issues.

Revealed: 10 million Facebook profiles removed

2025-07-14
Portfolio.hu
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to detect and limit spam and AI-generated fake content, which is a positive application of AI to reduce harm. There is no direct or indirect harm caused by AI malfunction or misuse described. The article focuses on the company's response and investment in AI infrastructure to improve platform integrity. This fits the definition of Complementary Information as it provides context and updates on AI-related governance and operational measures rather than reporting a new harm or plausible future harm.