Meta's AI Ad Systems Enable Widespread Scam and Illegal Ads, Generating Billions in Revenue


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Internal documents reveal that Meta's AI-driven ad systems failed to block billions of scam and illegal ads on Facebook, Instagram, and WhatsApp, exposing users to fraud and prohibited products. The AI systems block an ad only when they are at least 95% certain it is fraudulent, letting many harmful ads through; such ads generate up to 10% of Meta's annual revenue.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves Meta's automated advertising systems, which are AI systems that predict and classify ads for risk. The harm includes exposure of users to scams, illegal products, and fraudulent content, which harms communities and violates user rights. The harm is realized, not just potential, as evidenced by regulatory investigations and user losses. Meta's internal documents show the AI system's role in enabling these harms through insufficient blocking thresholds and business decisions. Hence, this qualifies as an AI Incident due to direct and indirect harm caused by the AI system's use and malfunction (or underperformance).[AI generated]
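The 95% blocking threshold described above is, in effect, a confidence cutoff applied to a fraud classifier's score. A minimal illustrative sketch in Python (hypothetical function names and scores, not Meta's actual system) shows why such a high cutoff lets most suspected scam ads run:

```python
# Illustrative sketch only: how a high-confidence blocking threshold lets
# most suspected scam ads through. All names and numbers are hypothetical.

def ad_decision(fraud_probability: float, block_threshold: float = 0.95) -> str:
    """Block an ad only when the classifier is almost certain it is fraudulent."""
    return "block" if fraud_probability >= block_threshold else "allow"

# Suppose a classifier scores five suspected-scam ads:
scores = [0.60, 0.75, 0.85, 0.93, 0.97]
decisions = [ad_decision(s) for s in scores]
# Only the 0.97 ad is blocked; the other four likely-fraudulent ads still run.
```

Lowering `block_threshold` would catch more of these ads at the cost of blocking some legitimate ones; the reporting suggests this trade-off was resolved in favour of allowing borderline ads.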
AI principles
Accountability · Safety · Robustness & digital security · Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

Business function
Marketing and advertisement · Monitoring and quality control

AI system task
Event/anomaly detection

In other databases

Articles about this incident or hazard


This is how Meta profits from disinformation. They obtained the company's documents

2025-11-06
Interia.pl - Biznes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves Meta's automated advertising systems, which are AI systems that predict and classify ads for risk. The harm includes exposure of users to scams, illegal products, and fraudulent content, which harms communities and violates user rights. The harm is realized, not just potential, as evidenced by regulatory investigations and user losses. Meta's internal documents show the AI system's role in enabling these harms through insufficient blocking thresholds and business decisions. Hence, this qualifies as an AI Incident due to direct and indirect harm caused by the AI system's use and malfunction (or underperformance).

Billions collected from scam ads. Significant reports concerning Meta

2025-11-06
rmf24.pl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI algorithms used by Meta to detect and block fraudulent ads, but these systems only block the most obvious cases, allowing many scam ads to reach users. This has resulted in actual harm to users through exposure to scams and illegal products, fulfilling the criteria for an AI Incident. The AI system's development and use in ad personalization and fraud detection are central to the incident, as their shortcomings have directly contributed to the harm. The scale of the problem and the financial incentives involved further confirm the systemic nature of the harm caused by AI system failures.

Reuters: 10 percent of Meta's revenue comes from ads for scams and prohibited goods

2025-11-06
wnp.pl
Why's our monitor labelling this an incident or hazard?
Meta's automated ad systems, which rely on AI for ad personalization and fraud detection, have been shown to inadequately prevent scam and prohibited product ads from reaching users, resulting in real harm through exposure to fraud and illegal content. The documents indicate that the AI systems' thresholds for blocking ads were set high, allowing many harmful ads to be shown, and that the personalization system may have increased exposure to such ads. This direct link between AI system use and harm to users and communities fits the definition of an AI Incident under the OECD framework.

Meta earns a fortune from fake ads

2025-11-06
pb.pl
Why's our monitor labelling this an incident or hazard?
Meta uses automated AI-based systems to detect and manage advertisements, including fraudulent ones. The article reveals that these systems allow a significant volume of scam ads to be published, causing harm to users through deception and financial loss. The harm is realized and ongoing, as evidenced by lawsuits and regulatory actions. The AI systems' decisions on blocking or allowing ads are pivotal in the chain of events leading to harm. Hence, the event meets the criteria for an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities through the spread of fraudulent ads.

Facebook lives off scams. That is as much as 10.1% of Meta's revenue

2025-11-06
telepolis.pl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Meta's use of automated systems to detect fraudulent ads, which are AI systems involved in ad screening and decision-making. The harm—widespread scams and illegal advertising causing financial and social damage—is occurring and directly linked to the AI system's operation and policies. The AI system's thresholds and tolerance for fraudulent ads, driven by financial considerations, have led to ongoing harm. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities and users. The article does not merely warn of potential harm but documents ongoing harm facilitated by AI systems.

Facebook and Instagram are full of scams. Meta does not react, because it pays off

2025-11-07
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system that automatically detects and filters risky advertisements. The system's threshold for action and its operational constraints lead to the continued display of fraudulent ads, directly contributing to harm through successful scams. This meets the definition of an AI Incident because the AI system's use and design have directly and indirectly led to harm to communities and violations of user rights. The harm is realized and ongoing, not merely potential. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

The dirty turnover of Facebook's owner

2025-11-07
wGospodarce.pl
Why's our monitor labelling this an incident or hazard?
Meta's automated ad systems, which rely on AI for detection and personalization, have been shown to allow fraudulent ads to proliferate, leading to user exposure to scams and financial harm. The documents indicate that the AI system's thresholds and policies contributed to this harm by not blocking suspicious advertisers promptly. The harm is realized and significant, affecting users' financial security and trust, which fits the definition of an AI Incident involving violations of user rights and harm to communities. The event is not merely a potential risk or a complementary update but a report of ongoing harm linked to AI system use.

Meta denies downloading porn to train its AI: it was for personal use...

2025-11-03
Techgear.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of alleged training data usage, but the allegations are denied and unproven. There is no direct or indirect evidence of harm caused by AI system development or use. The article focuses on the legal dispute and claims rather than a confirmed AI incident or hazard. Therefore, this is best classified as Complementary Information, providing context on ongoing legal and societal responses related to AI and data use, without confirmed harm or plausible imminent harm from AI systems.

Meta earns billions from scam ads on Facebook, Instagram, and WhatsApp

2025-11-07
Techgear.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as Meta's platforms use AI-driven ad targeting and content moderation algorithms. The harm is realized and ongoing: millions of users are exposed to fraudulent ads causing financial and psychological harm, which fits the definition of harm to communities and violations of rights. The AI system's role is pivotal in enabling the scale and precision of these scams and in the failure to effectively remove them due to company policies and system limitations. Therefore, this is an AI Incident rather than a hazard or complementary information.

Meta earns billions from scam ads on Facebook, Instagram, and WhatsApp

2025-11-07
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems because Meta's platforms rely on AI algorithms for ad targeting and content moderation. The widespread presence of fraudulent ads causing harm to users is a direct consequence of the AI systems' malfunction or inadequate control. The harm includes financial and trust damage to users and communities, fitting the definition of harm to people and communities. Since the harm is occurring and the AI systems are pivotal in enabling the fraudulent ads, this is classified as an AI Incident rather than a hazard or complementary information.

Meta estimates that 10% of its revenue comes from fraudulent ads - Youm7

2025-11-07
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions Meta's AI-based system for evaluating ad fraud likelihood, which is part of the ad delivery and monitoring process. The system's design and operational thresholds have directly led to the continued presence of fraudulent ads, causing harm to users who are exploited financially and through misinformation. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (fraud, exploitation) and violation of user rights. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the fraudulent ads to persist.

Report: Meta earns billions of dollars from fraudulent ads on its platforms | Tech Portal

2025-11-07
البوابة العربية للأخبار التقنية
Why's our monitor labelling this an incident or hazard?
Meta's advertising platforms use AI systems for content moderation and ad targeting. The report reveals that these AI systems are not effectively preventing fraudulent ads, indirectly causing harm to consumers and communities through scams and illegal product promotion. The harm is realized and ongoing, as fraudulent ads have led to successful scams. Therefore, this qualifies as an AI Incident due to the AI system's malfunction or inadequate use contributing to violations and harm.

A tenth of Meta's revenue comes from fraud!

2025-11-07
MEO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used by Meta to evaluate and manage advertising campaigns, which leads to harm by allowing fraudulent ads to be shown to users, causing financial and possibly health-related harm through illegal gambling and banned medical products. The system's operational policy (suspending advertisers only at 95% certainty of fraud, and otherwise merely charging them higher ad rates) creates a conflict of interest that results in ongoing harm. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of user rights and harm to communities through exposure to scams and illegal content. The harm is realized and ongoing, not just potential, and the AI system's role is pivotal in this harm.

Meta faces accusations of earning billions of dollars from fraudulent ads and illegal products

2025-11-10
الوفد
Why's our monitor labelling this an incident or hazard?
Meta's advertising platforms rely on AI systems for ad targeting, detection, and enforcement. The report reveals that despite using advanced AI algorithms to detect fraudulent ads, Meta has allowed a large volume of deceptive and illegal ads to run, causing financial harm to millions of users globally. The company's internal policies prioritize revenue over user safety, resulting in systemic harm. This meets the criteria for an AI Incident as the AI system's use and malfunction (or inadequate enforcement) have directly and indirectly caused harm to users and violated their rights.

Reuters revelation: Huge revenue for Meta from fake ads

2025-11-08
newsbreak
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in Meta's ad personalization and detection processes. The harms described include exposure of billions of users to fraudulent ads, which is a clear harm to communities and individuals (harm category d). The AI system's failure to effectively detect and block these ads, combined with policies that allow suspicious advertisers to continue, directly and indirectly caused this harm. The scale and duration of the issue confirm it is a realized harm, not just a potential risk. Hence, this is an AI Incident.

How Meta earns billions from fake ads - the Reuters revelation

2025-11-08
tothemaonline.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for ad personalization and fraud detection. These systems identify but do not effectively block fraudulent ads, allowing scams to proliferate and cause financial harm to users, which constitutes harm to individuals (a) and harm to communities (d). The AI system's outputs and design decisions directly contribute to the ongoing harm by enabling exposure to deceptive content and generating revenue from it. Therefore, this qualifies as an AI Incident due to realized harm linked to AI system use and malfunction (ineffective fraud prevention).

Revelation about Meta: How Zuckerberg's company earns billions from online scams

2025-11-08
The TOC
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in Meta's ad personalization and targeting algorithms that facilitate the display of billions of scam ads daily, causing direct financial harm to users (harm to persons and communities). The harm is realized, not just potential, as users are being targeted and scammed. The company's internal documents reveal awareness and tolerance of this harm for financial and AI development reasons, confirming the AI system's role in the incident. This fits the definition of an AI Incident because the AI system's use directly leads to significant harm (fraud and deception) to people and communities.

Reuters: Meta earns huge sums from fake ads on its platforms

2025-11-08
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by Meta for personalized advertising and fraud detection. The harm is realized and ongoing, as billions of users have been exposed to fraudulent ads leading to financial and other harms. The AI systems' failure to adequately detect and block these ads, combined with business decisions to tolerate some level of fraud for revenue, constitutes an AI Incident under the framework. The harms include violations of user rights and harm to communities due to exposure to scams and deceptive content. Thus, this is an AI Incident rather than a hazard or complementary information.

How Meta earns billions from fake ads - the Reuters revelation (video)

2025-11-08
ΠΟΛΙΤΗΣ
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered ad personalization and internal AI systems for fraud detection that are used to identify but not effectively block fraudulent ads, allowing them to persist and generate revenue. This use of AI systems directly contributes to harm by enabling scams and deceptive practices that financially and socially harm users. The harm is realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident under the framework, as the AI systems' use is pivotal in causing harm to communities and violating user rights.

Reuters: Shocking revelation about Meta

2025-11-08
ΕΛΕΥΘΕΡΟΣ ΤΥΠΟΣ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for ad personalization and risk assessment. These AI systems' operation directly leads to harm by enabling the spread of fraudulent advertisements that cause financial and other harms to users, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the documents indicate significant exposure to scams originating from these platforms. Therefore, this is classified as an AI Incident due to direct harm caused by the AI system's use and its role in facilitating harmful content dissemination.

Meta: Billions of fake ads bring huge profits to Facebook and Instagram

2025-11-08
Madata.GR
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely the automated ad review and fraud detection algorithms used by Meta to monitor advertisements. These AI systems' decisions directly impact which ads are shown or blocked. The documents reveal that many fraudulent ads continue to be shown because the AI systems only block ads with very high confidence, allowing a large volume of scam ads to be displayed daily. This has led to direct harm to users exposed to scams and illegal products, fulfilling the criteria for harm to communities and violation of rights. The company's knowledge and tolerance of this situation further confirm the AI system's role in the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Reuters revelation: How Meta reaps billions from fake ads

2025-11-08
ekriti
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in Meta's ad personalization and fraud detection mechanisms. These AI systems are used in the development and deployment of ads, including fraudulent ones, and their malfunction or intentional tolerance leads to direct harm to users through scams and financial losses. The harm is realized and ongoing, with Meta profiting from these deceptive ads. This fits the definition of an AI Incident because the AI system's use directly leads to violations of rights and harm to communities. The event is not merely a potential risk or complementary information but a clear case of harm caused by AI system use.

HOW SOCIAL MEDIA GET RICH AT OUR EXPENSE FROM ONLINE SCAMS

2025-11-07
NewsNowgr.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by Meta to target users with advertisements, including fraudulent ones. The AI system's use in promoting and failing to adequately filter scam ads has directly led to harm to users (financial losses) and communities. The harm is realized, not just potential, as evidenced by the scale of fraud passing through Meta's platforms. The company's policies and thresholds for removing ads indicate a malfunction or neglect in AI system use, contributing to the harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Reuters: Meta earns huge sums from fake ads on its platforms - sofokleous10.gr

2025-11-08
sofokleous10.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's automated systems that detect and manage advertisements, which are AI systems by definition due to their automated, personalized ad targeting and fraud detection functions. The failure of these AI systems to effectively block fraudulent ads has directly exposed users to scams and financial harm, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves violations of user rights and harm to communities. The article also details internal company knowledge and decisions that contributed to the persistence of this harm, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

How Meta earns billions from fake ads - the Reuters revelation - sofokleous10.gr

2025-11-08
sofokleous10.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used by Meta for ad personalization and fraud detection, which are integral to the operation and monetization of ads on its platforms. The internal documents reveal that Meta knowingly allows fraudulent ads to run unless there is very high certainty of fraud, effectively monetizing harmful content. This results in direct harm to users who are exposed to scams and deceptive products, fulfilling the criteria for harm to persons and communities. The AI system's role is pivotal in enabling this harm by targeting users and managing ad placements. Therefore, this event qualifies as an AI Incident.

How Meta earns billions from fake ads - the Reuters revelation

2025-11-08
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems to manage and deliver advertisements. The revelation that 10% of revenue comes from ads associated with scams and misleading content indicates that the AI systems either fail to filter or inadvertently facilitate harmful content. This results in direct financial harm and deception to users, qualifying as harm to communities and individuals. Since the harm is realized and linked to the AI systems' use, this event qualifies as an AI Incident.

Reuters: How Meta earns billions from fake ads

2025-11-08
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
Meta's advertising platform almost certainly employs AI systems for automated ad screening and content moderation. The internal documents reveal that despite knowing a significant portion of revenue came from fraudulent or illegal ads, Meta's AI systems failed to detect or limit these ads for years. This failure directly contributed to harm by allowing scams and illegal products to be promoted, impacting users and communities. The harm is realized and ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information.

How Meta earns billions from fake ads | Cyprus Times

2025-11-08
cyprustimes.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for ad personalization and fraud detection. These systems are knowingly permitting fraudulent ads to run, directly leading to harm such as financial scams and user exploitation. The harm is realized and ongoing, fulfilling the criteria for an AI Incident. The article details how AI-enabled ad targeting and moderation systems contribute to the harm, and how Meta's business model profits from it. Therefore, this is an AI Incident due to direct harm caused by AI system use and company policies enabling it.

How Meta earns billions from online scams

2025-11-08
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to personalize and deliver advertisements, including fraudulent ones, to users. The AI system's use directly leads to harm (financial and informational) to users, fulfilling the criteria for an AI Incident. The harm includes violations of user rights and harm to communities through widespread scams. The article documents realized harm, not just potential risk, and shows Meta's internal decisions that exacerbate the issue. Hence, it is not merely a hazard or complementary information but a clear AI Incident.

Billions of fake ads are shown daily on Facebook and Instagram - Meta's huge revenue from scams

2025-11-08
News24world
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta to detect and manage advertisements on Facebook, Instagram, and WhatsApp. The failure of these AI systems to adequately identify and block fraudulent ads has directly led to harm, including financial losses and exposure to illegal products, which fits the definition of harm to communities and individuals. The internal documents show systemic issues in the AI system's operation and company policies that tolerate or insufficiently address these harms. Therefore, this is an AI Incident due to the realized harm caused by the AI system's use and malfunction.

How Meta earns billions from fake ads - dete.gr

2025-11-08
dete | News | Patras | Western Greece
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for ad personalization and fraud detection. These systems are knowingly permitting fraudulent ads to run, directly leading to harm such as financial scams and deception of users. The harm is realized and ongoing, fulfilling the criteria for an AI Incident. The article details how the AI system's use and the company's policies contribute to this harm, making it a direct cause of injury to people and harm to communities.

Reuters revelation: How Meta earns billions from "scam" ads for fake products

2025-11-08
Ant1 Live
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's automated systems (AI systems) that manage ad placements and detect fraudulent ads. These systems failed to block a large volume of scam ads, exposing users to financial losses and scams, which constitutes harm to people and communities. The harm is realized and ongoing, with documented cases of users losing money due to these AI system failures. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly and indirectly led to significant harm.

Reuters brings to light an incredible scandal for Meta, which shows users 15 billion "high-risk" ads per day

2025-11-10
PCMag Greece
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta to manage and personalize online advertisements. These AI systems' operation directly leads to the dissemination of fraudulent ads, causing financial harm to users and communities. The harm is realized and ongoing, as evidenced by the billions of high-risk ads shown daily and the estimated billions in revenue from such ads. The AI's malfunction or design choices (e.g., high threshold for blocking) contribute to this harm. Hence, the event meets the criteria for an AI Incident, as the AI system's use has directly led to significant harm to people and communities through fraud.

Meta: Ad scams brought in up to $16 billion annually - BusinessNews.gr

2025-11-10
businessnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's use of automated systems, which reasonably include AI, to manage and display advertisements. These systems have allowed a large volume of fraudulent and deceptive ads to be shown daily, causing direct financial harm to users worldwide. The harm includes economic losses from scams and violations of consumer rights, fitting the definition of harm to communities and violations of rights. The company's internal policies prioritize revenue over fully eliminating these harmful ads, indicating the AI systems' outputs have directly or indirectly led to significant harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Here's how many billions Meta earned from ads that are trying to scam you

2025-11-11
The Star
Why's our monitor labelling this an incident or hazard?
The personalized ad system is an AI system that recommends ads based on user behavior. The presence of scam ads and the increased exposure to them due to the AI-driven personalized ad system directly leads to harm to users (harm to communities and individuals through scams). The event involves the use of AI in ad targeting and delivery, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident due to the direct or indirect role of AI in causing harm through scam ads.

Leaked Meta documents predicted 10% of its revenue came from scam ads in 2024 - Which?

2025-11-12
Which?
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Meta's ad-personalisation system, which uses AI to deliver ads based on user interests. This AI system is directly involved in exposing billions of users to scam ads daily, leading to financial harm and violations of consumer rights. The harm is realized and ongoing, as users are being scammed and Meta profits from these ads. The AI system's role is central to the incident, as it facilitates the targeting and spread of fraudulent ads. Hence, this event meets the criteria for an AI Incident.

Explained: How Meta made billions from scam ads | - The Times of India

2025-11-13
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Meta's ad-personalisation engine, which uses AI to target users with ads based on their interactions. This AI system has amplified scam ads by showing users more of the same after clicking, thereby increasing exposure to fraudulent content. The resulting harm includes millions of users losing money to scams, which qualifies as injury or harm to people. The involvement of AI in the development and use of the ad targeting system, combined with the direct link to realized harm, classifies this as an AI Incident. Although Meta disputes some estimates, the harm is clearly described as occurring, not just potential.
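The amplification effect described in this article is a generic property of engagement-driven recommenders: clicking an ad raises the user's inferred interest in that ad's category, so similar ads rank higher next time. A hypothetical sketch of that feedback loop (invented category names and weights, not Meta's actual ranker):

```python
# Hypothetical sketch of the click-feedback effect: engagement with a scam ad
# upweights its category, so a naive interest-based ranker shows more of the
# same. Categories and weights are invented for illustration.
from collections import defaultdict

def rank_ads(interest: dict, candidates: list) -> list:
    """Order candidate ads by the user's accumulated interest in their category."""
    return sorted(candidates, key=lambda ad: interest[ad["category"]], reverse=True)

def record_click(interest: dict, ad: dict) -> None:
    """A click raises the weight of the clicked ad's category."""
    interest[ad["category"]] += 1.0

interest = defaultdict(float)
ads = [{"id": 1, "category": "scam_crypto"}, {"id": 2, "category": "retail"}]

record_click(interest, ads[0])    # the user clicks one scam ad once
top = rank_ads(interest, ads)[0]  # scam-category ads now rank first for this user
```

After a single click on the scam ad, the ranker already prefers ads from the same category, which is the "more of the same" dynamic the article describes.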

Consumer Reports calls on the FTC and state attorneys general to take action against Meta for its failure to mitigate harmful scam advertisements

2025-11-13
CR Advocacy
Why's our monitor labelling this an incident or hazard?
Meta's advertisement delivery system uses AI algorithms to target and deliver ads to users. The report states that Meta knowingly allowed billions of scam ads to be delivered daily, and its algorithms helped proliferate these harmful ads. This directly links the AI system's use to the harm caused by scams, which are significant injuries to consumers and violations of consumer protection laws. The failure to take reasonable steps to stop these harmful ads despite capacity to do so further supports classification as an AI Incident. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard or Complementary Information.

Meta must rein in scammers -- or face consequences

2025-11-14
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of algorithmic ad recommendation and AI-generated scam content (deepfakes). These systems' use and malfunction (or deliberate underperformance) have directly led to significant financial harm to users, including vulnerable groups, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, with billions lost globally to scams facilitated by AI-enhanced ads on Meta's platforms. The article details direct links between AI system use and harm, not just potential risks or responses, so it is classified as an AI Incident.

Scandal at Meta: documents reveal it earned US$7 billion in revenue from ads for prohibited products

2025-11-18
Ambito
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to detect fraudulent and illegal advertisements. The AI's detection threshold and sanctioning policy directly influence the presence of harmful ads on the platform. The continued allowance of such ads leads to harm to users and communities through exposure to scams and fraud, fulfilling the criteria for harm to communities and violation of rights. The AI system's use and malfunction (in terms of policy thresholds) are contributing factors to this harm. Hence, this is classified as an AI Incident.

Explosive leak: Meta allegedly profited from scams and prohibited ads

2025-11-18
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions automated systems used by Meta to detect fraudulent ads, which can be reasonably inferred as AI systems given their role in large-scale ad content analysis and personalization. The harm caused includes exposure to scams and fraudulent advertisements, which constitute harm to communities and individuals. The AI system's failure to effectively block these ads due to a high certainty threshold and the reinforcement of fraudulent content through personalization directly contributes to this harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use and malfunction have directly or indirectly led to significant harm.

We must be able to demand accountability from the platforms

2025-11-19
Milenio.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of deepfake generation and AI-based ad targeting algorithms on Meta's platforms. These AI systems have directly led to harm (financial fraud, exploitation, and harm to communities) by enabling the spread of fraudulent content and failing to adequately prevent it. The harm is materialized and significant, including financial losses and human rights violations (trafficking and forced labor in scam centers). The article also discusses the AI systems' role in both the problem and partial mitigation, confirming the AI system's involvement in the incident. Hence, the classification as an AI Incident is appropriate.

Meta Makes Millions from Ads for Illegal Products | Sitios Argentina.

2025-11-21
SITIOS ARGENTINA - Argentine news and media portal.
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in Meta's ad platforms that manage and filter advertisements. The harm is direct and materialized, as users are exposed to fraudulent and illegal product ads, causing harm to communities and violating laws. The AI system's insufficient filtering and decision thresholds contribute to the harm by allowing these ads to be shown and generate revenue. This fits the definition of an AI Incident because the AI system's use has directly led to harm through the dissemination of illegal and harmful content.