AI-Generated Voice Used in Scam Targeting Drica Moraes' Contacts

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals cloned Brazilian actress Drica Moraes' phone and used AI to generate fake voice messages, impersonating her to scam her contacts via WhatsApp. The AI-enabled impersonation led to fraudulent requests for money and personal information, prompting Moraes to publicly warn her followers about the ongoing scam.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI to generate a fake voice message impersonating a person constitutes the use of an AI system in a malicious way that directly leads to harm (fraud, deception) to individuals (friends and family of the victim). The cloning of the phone and the AI-generated voice message together caused realized harm through attempted fraud and emotional distress. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through malicious use.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Digital security

Affected stakeholders
General public

Harm types
Economic/Property
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Drica Moraes reports falling victim to an AI scam: "Don't fall for it"; explained - Revista Fórum

2026-04-06
Revista Fórum
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake voice message impersonating a person constitutes the use of an AI system in a malicious way that directly leads to harm (fraud, deception) to individuals (friends and family of the victim). The cloning of the phone and the AI-generated voice message together caused realized harm through attempted fraud and emotional distress. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through malicious use.
Drica Moraes warns of a scam using artificial intelligence after having her phone cloned

2026-04-06
Correio
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to clone the actress's voice to send fraudulent audio messages, which is a direct misuse of AI technology causing harm to individuals targeted by the scam. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial scams and deception).
Criminals recreate Drica Moraes' voice with artificial intelligence to run scams: "Don't fall for it"

2026-04-06
Revista Marie Claire Brasil
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake voice for the purpose of scamming people constitutes an AI Incident because the AI system's use directly leads to harm (fraud, deception, potential financial loss) to individuals. The event involves the malicious use of AI-generated content causing realized harm, fitting the definition of an AI Incident under violations of rights and harm to communities.
Drica Moraes has her phone cloned and her data used in a scam: "Don't fall for it"

2026-04-06
Extra Online
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to generate voice messages in the scam. The cloning of the phone and the AI-generated messages have directly led to harm by enabling fraudulent communication and misuse of personal data, which constitutes harm to individuals. Therefore, this qualifies as an AI Incident due to the realized harm caused by the malicious use of AI-generated content in a scam.
Drica Moraes issues a warning after having her phone cloned: "Don't fall for it"

2026-04-06
Home
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate voice messages impersonating Drica Moraes, which were then used in a scam involving phone cloning. This AI-enabled impersonation directly led to harm by deceiving people and exposing them to fraud attempts. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to people through fraudulent activity.
Actress Drica Moraes falls victim to a scam using AI and warns: "Don't fall for it"

2026-04-07
Portal Leo Dias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to clone the victim's voice, enabling criminals to impersonate her and deceive her contacts. The AI system's use directly led to harm through fraudulent communication and potential financial or privacy damage to the victims. The harm is realized and ongoing, as the scam is actively targeting people. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Globo actress has her phone cloned and warns about scam | A TARDE

2026-04-07
Portal A TARDE
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to generate voice messages impersonating the actress, which is a direct misuse of AI technology. The cloning of the phone and the AI-generated audio messages have directly led to harm by enabling a scam that deceives people, causing potential financial and emotional harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (fraud and deception).
Criminals recreate Drica Moraes' voice to run scams

2026-04-07
SRzd
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to clone the voice of a person to conduct fraudulent activities, which directly leads to harm (financial scams and emotional distress). The AI system's use is malicious and instrumental in the scam, fulfilling the criteria for an AI Incident due to realized harm to individuals (harm to persons and communities).
Actress Drica Moraes has her voice cloned by AI and warns about the scam

2026-04-07
VTV News | News from the Interior and Coast of São Paulo
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the actress's voice and impersonate her directly led to harm by enabling a scam that affected her contacts, causing distress and potential financial or informational harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to individuals targeted by the scam.
Estrelando

2026-04-06
ESTRELANDO
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the actress's voice and produce fake audio messages directly contributed to a scam attempt, which constitutes harm to people (victims of fraud attempts). The AI system's use here is malicious and directly linked to the harm caused. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated voice cloning facilitating criminal activity.
Drica Moraes has her phone cloned and warns of a scam using artificial intelligence

2026-04-06
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate the actress's voice to trick her contacts, which is a direct misuse of AI technology leading to harm by enabling fraud and potential financial or emotional damage to victims. This fits the definition of an AI Incident as the AI system's use has directly led to harm through deception and fraud.
Actress Drica Moraes warns of a scam after having her phone cloned

2026-04-07
Portal R7
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-generated voice to impersonate the actress in fraudulent messages, which are actively being sent to her contacts to commit scams. This is a clear example of an AI system's use leading directly to harm (fraud, deception, potential financial loss, and emotional distress). Therefore, it meets the criteria for an AI Incident as the AI system's use has directly led to harm to people and communities.
Drica Moraes falls victim to criminals and denounces voice-cloning AI

2026-04-06
GazetaWeb * Pioneer and Leader in Online News in Maceió and Alagoas.
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to clone the actress's voice, which was then used maliciously by criminals to impersonate her and attempt financial scams. This constitutes direct misuse of an AI system leading to harm (attempted fraud and deception). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to people (attempted financial harm and deception).
Actress Drica Moraes warns of a scam after her phone is cloned

2026-04-07
Portal Tela
Why's our monitor labelling this an incident or hazard?
The use of AI-generated voice to impersonate the victim and send fraudulent messages directly leads to harm by enabling scams and deception. The AI system's misuse is central to the incident, causing harm to individuals (fraud victims) and communities (trust erosion). Therefore, this qualifies as an AI Incident due to realized harm facilitated by AI misuse.
Drica Moraes falls victim to an online scam and reports the criminals | CNN Brasil

2026-04-07
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate synthetic voice and image content to impersonate a person for fraudulent purposes. This use of AI directly led to harm by enabling scams that deceive victims into sending money, which constitutes harm to people. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through malicious impersonation and fraud.
Criminals steal phone and run a scam using actress Drica Moraes' cloned voice

2026-04-08
Record
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the actress's voice and send false messages constitutes the use of an AI system in a way that directly led to harm (fraud and emotional harm) to individuals. The event involves the malicious use of AI-generated content causing realized harm to people, fitting the definition of an AI Incident under violations of rights and harm to communities.