Meta Sues Over Deepfake-Driven Health Fraud in Brazil

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta has filed lawsuits against individuals and companies in Brazil for using AI-generated deepfakes of celebrities and doctors in fraudulent health product ads on its platforms. The deepfakes misled users, resulting in financial and privacy harm. Legal actions also target similar schemes in China and Vietnam.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (deepfake technology) used to create fraudulent content that has directly led to harm by deceiving users and promoting fraudulent products, which constitutes harm to communities and violations of rights. Meta's legal actions are responses to these harms. Since the harms have already occurred due to the use of AI-generated deepfakes, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability; Privacy & data governance

Industries
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

Meta takes legal action over deepfakes in Brazil and China that impersonate celebrities

2026-02-27
andina.pe
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) used to create fraudulent content that has directly led to harm by deceiving users and promoting fraudulent products, which constitutes harm to communities and violations of rights. Meta's legal actions are responses to these harms. Since the harms have already occurred due to the use of AI-generated deepfakes, this qualifies as an AI Incident rather than a hazard or complementary information.
Meta files complaints over deepfake scams in Brazil and China

2026-02-27
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create fraudulent content that directly leads to harm by deceiving people and promoting scams. The AI system's use in generating fake celebrity images and voices for fraudulent advertising has caused realized harm, including financial and reputational damage, which fits the definition of an AI Incident. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in enabling the fraud.
Celebrity identity theft and scams: Meta files complaints over deepfakes in Brazil and China

2026-02-27
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used maliciously to create fake celebrity content for scams, which constitutes harm to communities and individuals through fraud. The AI system's use directly leads to realized harm (scams and identity theft). Therefore, this qualifies as an AI Incident under the framework, as the AI system's misuse has directly led to harm.
Meta sues Brazilians for using deepfakes in fake health product ads

2026-02-27
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated content that directly causes harm to individuals and communities by promoting false health products, which constitutes a violation of rights and causes financial and reputational harm. The harm is realized, not just potential, and the AI system's use is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Meta sues groups in Brazil and China over scams using celebrities

2026-02-26
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the use of deepfake technology, a form of AI-generated synthetic media. These deepfakes were used in fraudulent advertisements to deceive users into scams, directly causing harm to individuals and communities. This meets the criteria for an AI Incident, as the AI system's use has directly led to harm. The mentions of Meta's facial recognition system and legal actions are complementary details and do not overshadow the primary incident of harm caused by AI misuse.
Meta sues Brazilians for using deepfakes in fake health product ads

2026-02-27
O TEMPO
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated videos and images that are used in fraudulent advertisements. This use of AI has directly led to harm: deception of consumers, violation of rights of individuals whose images and voices were manipulated, and potential health harm from false health product claims. The legal actions and platform responses confirm the harm has materialized. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Meta files complaint against deepfake fraud in Brazil and China

2026-02-27
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the use of deepfake technology, which is an AI system generating synthetic voice and image content. The fraudulent use of these AI-generated deepfakes has directly led to harm by deceiving people into scams and unauthorized advertising, fulfilling the criteria for an AI Incident. The harm includes violations of rights (fraud, deception), harm to communities (public health fraud), and potential financial harm. Therefore, this is classified as an AI Incident.
Meta files complaints over deepfake scams targeting celebrities in Brazil and China

2026-02-27
7sur7
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used maliciously to create hyperrealistic fake content that deceives users and causes financial harm (scams). The harm is realized as users are being defrauded through these AI-generated impersonations. Therefore, this qualifies as an AI Incident due to the direct link between AI misuse and harm to individuals (harm to communities and property through fraud).
Meta announces lawsuits over deepfakes in Brazil and China

2026-02-27
CartaCapital
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media, and their use here is explicitly linked to fraudulent schemes causing harm to individuals and public health. The article details actual harm caused by these AI systems, including scams and deception, which fits the definition of an AI Incident. The legal actions are responses to these harms but do not change the classification of the event as an incident rather than complementary information.
Meta files complaints in Brazil and China over celebrity deepfakes

2026-02-27
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create deepfake videos and audio impersonating celebrities to commit fraud, which directly harms consumers and internet users by misleading them and promoting unauthorized products. This constitutes a violation of rights and causes harm to communities through deception and fraud. Since the AI system's misuse has directly led to realized harm, this qualifies as an AI Incident.
Meta sues scammers for using celebrity "deepfakes" on its platforms in Brazil and China

2026-02-27
El Economista
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used maliciously to impersonate celebrities and deceive people, resulting in realized harm such as fraud and public health risks. Meta's lawsuits target these fraudulent uses of AI-generated content, indicating that the AI system's use has directly led to harm. This fits the definition of an AI Incident, as the AI system's use has caused violations of rights and harm to communities. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by AI misuse.
Meta sues scammers over deepfakes of Drauzio Varella and other celebrities

2026-02-26
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is an AI system generating synthetic media. The deepfakes were used maliciously to promote fraudulent health products, directly leading to harm by misleading consumers and potentially causing health risks. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights. The legal actions and platform responses are complementary information but the core event is an AI Incident due to realized harm from AI misuse.
Meta launches legal action over deepfakes that deceive users in Brazil and China

2026-02-27
ABC Digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfakes created with AI) used maliciously to impersonate public figures and deceive users, causing direct harm through scams and misinformation. Meta's legal actions respond to these harms, which have already occurred, fulfilling the criteria for an AI Incident. The AI system's use is central to the harm, as the deepfakes enable the fraudulent schemes. Hence, this is an AI Incident rather than a hazard or complementary information.
Meta files complaints over deepfake scams in Brazil and China

2026-02-27
Europe 1
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes to impersonate celebrities and promote fraudulent schemes, which have caused actual harm to people through scams. The AI system's use in creating hyper-realistic fake content is central to the fraudulent activity. Since harm has occurred due to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.
Deepfakes: Meta files complaints against fraudsters in Brazil and China

2026-02-27
Boursier.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create hyper-realistic fake images and voices of celebrities to perpetrate scams on Meta's platforms. These scams have caused direct harm to users by tricking them into sharing personal information or sending money, which constitutes harm to individuals and communities. The AI system's use in generating these deepfakes is central to the incident. Meta's legal actions are a response to this realized harm. Hence, the event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.
Meta goes to court against Brazilians over deepfakes in ads for 'fraudulent' health products

2026-02-26
O Globo
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit through the use of deepfake technology, which is an AI system capable of generating realistic fake images and voices. The harm is realized as these deepfakes were used in fraudulent advertisements causing financial and privacy harm to individuals. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to communities through fraud.
Meta at war with deepfake scams: lawsuits filed in China and Brazil

2026-02-27
Senego.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create ultrarealistic fraudulent advertisements that deceive users and harm reputations. The harm is realized as users are tricked into fraudulent purchases and the reputations of celebrities and platforms are damaged. Meta's legal actions are a response to these harms caused by the AI system's misuse. Hence, this is an AI Incident involving the use and misuse of an AI system leading to direct harm.
Meta sues Brazilians over deepfakes in fake ads

2026-02-26
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is an AI system capable of generating realistic manipulated images and voices. The fraudulent ads caused direct harm by misleading consumers into purchasing fake health products and falling victim to scams, which is a clear violation of rights and harm to communities. The involvement of AI-generated content as a tool for deception and fraud meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use.
Meta announces lawsuits over deepfakes in Brazil and China

2026-02-27
UOL notícias
Why's our monitor labelling this an incident or hazard?
The use of AI-generated deepfakes to impersonate celebrities and brands for fraudulent purposes directly leads to harm to people by deceiving them and potentially causing financial or reputational damage. Since the AI system's use has directly led to harm (fraud and deception), this qualifies as an AI Incident. The announcement of legal actions is a response to this harm but does not change the classification of the event as an incident.
Meta files complaint against deepfake fraud in Brazil and China

2026-02-27
ECO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes (an AI system) to create fraudulent advertisements that mislead and defraud people, causing harm. The harm includes deception leading to financial fraud and unauthorized health product promotion, which are direct harms to individuals and communities. Meta's legal actions are responses to these harms but do not negate the fact that the AI system's misuse has already caused harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Meta announces lawsuits over deepfakes in Brazil

2026-02-27
Home
Why's our monitor labelling this an incident or hazard?
The use of AI-generated deepfakes to impersonate celebrities and promote fraudulent products or scams directly leads to harm by deceiving and defrauding individuals, which fits the definition of an AI Incident. The article details realized harms caused by the AI system's outputs (deepfakes) and Meta's response through legal action. Therefore, this event is classified as an AI Incident due to the direct harm caused by the AI system's use in fraud and misinformation.
Meta files complaints over deepfake scams in Brazil and China

2026-02-27
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake content that impersonates celebrities to scam people, which is a direct violation of rights and causes harm to communities through fraud. The AI system's use in generating deceptive content that leads to scams and misinformation fits the definition of an AI Incident, as harm has already occurred due to the fraudulent activities enabled by AI-generated deepfakes.
Meta files complaint against 'deepfake' fraud in Brazil and China

2026-02-27
Executive Digest
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create realistic fake content that was used to deceive and defraud people, which constitutes harm to individuals and communities. The fraudulent use of AI-generated deepfakes directly led to realized harm through scams and misinformation. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through fraudulent activities.
Meta sues Brazilians over health product deepfakes

2026-02-27
O Antagonista
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake technology) used maliciously to create false advertisements that have caused harm to individuals and communities by spreading misinformation and potentially harmful health claims. The harm includes violation of rights (use of images without consent), deception leading to possible health and financial harm, and damage to public trust. The direct link between the AI system's use and the harm is clear, fulfilling the criteria for an AI Incident. The legal actions and platform measures are responses but do not change the classification of the event as an incident.
Meta sues over deepfakes in Brazil and China

2026-02-27
Portal Tela
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfakes, which are AI-generated synthetic media, to impersonate celebrities and promote fraudulent products and investment schemes. This misuse has directly caused harm by deceiving consumers and violating image rights, fulfilling the criteria for an AI Incident. The legal actions by Meta are responses to these harms, but the primary event is the realized harm caused by AI-generated deepfakes. Hence, the classification is AI Incident.
Meta sues over deepfake scams

2026-02-27
AmericaMalls & Retail
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is an AI system generating hyperrealistic audiovisual content. The development and use of these AI systems have directly led to harms including financial fraud, health risks from unapproved products, and violations of personal rights through identity theft and manipulation. The article reports on actual incidents of harm and legal actions taken, thus qualifying as an AI Incident rather than a hazard or complementary information. The harms are realized and ongoing, not merely potential.
Meta announces lawsuits against advertisers over scams using deepfakes

2026-02-27
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (deepfakes) to create altered images and voices of celebrities to promote fraudulent products, which is a direct misuse of AI technology causing harm to consumers and violating rights. The harm is realized as these deepfakes were used in active scams and misleading advertising. Meta's legal actions and protective measures confirm the AI system's role in causing harm. Hence, this event qualifies as an AI Incident.
'A drop of water in an ocean of health fraud,' says Drauzio Varella about Meta's decision

2026-02-26
Brasil 247
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system, to create fraudulent advertisements that deceive users, causing direct harm through scams and misinformation related to health products. The harm includes financial loss and potential health risks, which fall under harm to persons and communities. The AI system's use in generating manipulated content is central to the incident. Although Meta's legal action and platform responses are mentioned, the primary focus is on the ongoing harm caused by AI-generated deepfakes, making this an AI Incident rather than complementary information or a hazard.
'A drop of water in an ocean of health fraud,' says Drauzio about Meta going to court over ads that use his image

2026-02-26
O Globo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes (an AI system) to create fraudulent advertisements that mislead users into scams, causing harm such as financial fraud and misinformation about health. The AI system's misuse directly leads to harm to individuals and communities, fulfilling the criteria for an AI Incident. The legal and technical responses by Meta further confirm the harm has occurred and is ongoing, rather than being a mere potential risk or complementary information.
Meta goes to court against Brazilians over deepfakes in fraudulent ads

2026-02-26
Extra Online
Why's our monitor labelling this an incident or hazard?
The use of deepfake AI technology to create manipulated images and voices of public figures in ads is explicitly mentioned. These AI-generated deepfakes are used to deceive users into clicking fraudulent ads, resulting in financial harm and privacy violations. This constitutes direct harm to people (harm to health or property through fraud) caused by the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through scams and fraud.
'It's a crumb,' says Drauzio Varella about Meta going to court over ads that use his image

2026-02-27
O Globo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (deepfake technology) to create manipulated images and voices of public figures for fraudulent advertising, which has directly led to harm (financial fraud, misinformation about health products). The involvement of AI in generating deceptive content that causes real harm to users fits the definition of an AI Incident. Although Meta's legal actions are a response, the primary focus is on the ongoing harm caused by AI misuse, not just the response itself.
'A drop of water in an ocean of health fraud,' says Drauzio about Meta going to court over ads that use his image

2026-02-26
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology used to create manipulated images and voices of public figures to perpetrate fraud. The use of these AI-generated deepfakes has directly led to harm by facilitating scams that mislead users into sharing personal information and money, which constitutes harm to individuals and communities. Meta's legal and technical responses are reactions to an ongoing AI Incident involving realized harm. Hence, the classification is AI Incident.
Meta sues Brazilians for using deepfakes in fraudulent ads

2026-02-26
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to generate manipulated images and voices of public figures, which directly led to harm by facilitating fraud and financial losses to users. The AI system's use in creating deceptive content that caused harm to individuals and communities fits the definition of an AI Incident. The article details realized harm, not just potential harm, and the AI system's role is pivotal in enabling the fraudulent schemes.
Meta files lawsuits against digital fraudsters

2026-02-27
VEJA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes and AI tools for detecting cloaking, both involving AI systems. The fraudulent use of these AI technologies has directly led to financial harm to consumers, including scams and unauthorized charges, fulfilling the criteria for harm to persons and communities. The involvement of AI in the development and use of these fraudulent schemes, as well as the harm caused, clearly classifies this as an AI Incident rather than a hazard or complementary information. The legal actions and technical measures are responses to an ongoing AI Incident.