AI-Generated Deepfakes Used to Impersonate Doctor and Promote Illegal Medicines in Brazil


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A criminal group in Brazil used AI to clone the voice and image of renowned physician Drauzio Varella, creating deepfake videos that promoted unapproved and illegal medicines on social media. Authorities conducted raids in Itapema targeting the scheme, which endangered public health and damaged the doctor's reputation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI to create deepfake video and audio impersonating a trusted medical professional in order to promote unapproved and illegal medicines directly endangers public health and violates regulatory law. The AI system's misuse led to misinformation and potential physical harm to consumers, fulfilling the criteria for an AI Incident under harm to health and violation of applicable law.[AI generated]
AI principles
Safety; Transparency & explainability

Industries
Healthcare, drugs, and biotechnology; Media, social platforms, and marketing

Affected stakeholders
Workers; General public

Harm types
Reputational; Physical (injury)

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Group used AI to imitate Drauzio Varella and sell illegal medicines in Itapema - Visor Notícias

2026-04-22
Visor Notícias

Group that used AI to 'clone' Drauzio Varella and sell medicines is the target of an operation in SC

2026-04-22
ND
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to imitate the voice and manipulate the image of Drauzio Varella to promote unapproved medications, which is a direct misuse of AI technology causing harm. The harm includes misleading the public, risking health due to unregistered drugs, and damaging the professional reputation of the doctor. These factors meet the criteria for an AI Incident as the AI system's use has directly led to significant harm.

SC's Gaeco supports operation investigating the use of AI in fraud involving fake health treatments

2026-04-22
OCP News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create manipulated audiovisual content (deepfakes) that directly harms a person's professional reputation and indirectly harms public health by promoting unapproved treatments. These harms fall under violations of rights and harm to health. The AI system's use is central to the incident, as it enables the fraudulent and harmful dissemination of false health information. Therefore, this qualifies as an AI Incident.

Scammers use AI to "clone" Drauzio Varella and sell illegal medicines

2026-04-22
Top Elegance
Why's our monitor labelling this an incident or hazard?
The use of AI to clone a person's voice and image to promote unapproved medicines constitutes direct misuse of AI technology causing harm to public health and violating rights (reputation and possibly consumer protection laws). The AI system's outputs were instrumental in the fraudulent scheme, leading to realized harm. Therefore, this qualifies as an AI Incident.

Operation targets AI scams that imitate the voice and face of a "doctor" to sell illegal products - Jornal Razão

2026-04-22
Jornal Razão
Why's our monitor labelling this an incident or hazard?
The use of AI to create fake videos impersonating a medical professional to sell unregistered products involves an AI system's use leading to potential harm to public health and violation of rights (misleading consumers and damaging professional credibility). Although the investigation is ongoing and the full extent of harm is not detailed, the described use of AI-generated deepfakes to promote illegal products constitutes an AI Incident due to the realized or highly probable harm to health and rights.

GAECO acts in support of Operation Double Check, led by the GAECO of the MPSP, aimed at investigating online crimes of false representation ("falsidade ideológica") and crimes against public health

2026-04-23
Jornal do Comércio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for voice and image manipulation, i.e. AI systems generating deceptive content. This use has harmed individuals' reputations and poses a risk to public health through the promotion of unapproved medicines. These harms fall under injury or harm to health and harm to communities, fulfilling the criteria for an AI Incident.