AI-Generated Voice Used in Scam Targeting Drica Moraes' Contacts


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals cloned Brazilian actress Drica Moraes' phone and used AI to generate fake voice messages, impersonating her to scam her contacts via WhatsApp. The AI-enabled impersonation led to fraudulent requests for money and personal information, prompting Moraes to publicly warn her followers about the ongoing scam.[AI generated]

Why's our monitor labelling this an incident or hazard?

Using AI to generate fake voice messages impersonating a person is a malicious use of an AI system that directly harms individuals, in this case the victim's friends and family, through fraud and deception. The phone cloning and the AI-generated voice messages together caused realized harm in the form of attempted fraud and emotional distress. This event therefore qualifies as an AI Incident because of AI's direct involvement in causing harm through malicious use.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Digital security

Affected stakeholders
General public

Harm types
Economic/Property
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Drica Moraes reports falling victim to an AI scam: "Don't fall for it"; here's what happened - Revista Fórum

2026-04-06
Revista Fórum

Drica Moraes warns about scam using artificial intelligence after having her phone cloned

2026-04-06
Correio
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to clone the actress's voice and send fraudulent audio messages, a direct misuse of AI technology that harms the individuals targeted by the scam. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial fraud and deception).

Criminals recreate Drica Moraes' voice with artificial intelligence to carry out scams: 'Don't fall for it'

2026-04-06
Revista Marie Claire Brasil
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake voice for the purpose of scamming people constitutes an AI Incident because the AI system's use directly leads to harm (fraud, deception, potential financial loss) to individuals. The event involves the malicious use of AI-generated content causing realized harm, fitting the definition of an AI Incident under violations of rights and harm to communities.

Drica Moraes has her phone cloned and her data used in a scam: 'Don't fall for it'

2026-04-06
Extra Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system being used to generate the voice messages in the scam. The phone cloning and the AI-generated messages directly enabled fraudulent communication and the misuse of personal data, constituting harm to individuals. This therefore qualifies as an AI Incident due to the realized harm caused by the malicious use of AI-generated content in a scam.

Drica Moraes issues warning after having her phone cloned: 'Don't fall for it'

2026-04-06
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate voice messages impersonating Drica Moraes, which were then used in a scam involving phone cloning. This AI-enabled impersonation directly led to harm by deceiving people and exposing them to fraud attempts. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to people through fraudulent activity.

Actress Drica Moraes falls victim to scam using AI and warns: "Don't fall for it"

2026-04-07
Portal Leo Dias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to clone the victim's voice, enabling criminals to impersonate her and deceive her contacts. The AI system's use directly led to harm through fraudulent communication and potential financial or privacy damage to the victims. The harm is realized and ongoing, as the scam is actively targeting people. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.