
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Criminals cloned Brazilian actress Drica Moraes' phone and used AI to generate fake voice messages impersonating her, scamming her contacts via WhatsApp. The AI-enabled impersonation led to fraudulent requests for money and personal information, prompting Moraes to publicly warn her followers about the ongoing scam.[AI generated]
Why is our monitor labelling this an incident or hazard?
Using AI to generate fake voice messages impersonating a person is a malicious use of an AI system that directly caused harm (fraud and deception) to individuals, namely the victim's friends and family. Together, the phone cloning and the AI-generated voice messages produced realized harm in the form of attempted fraud and emotional distress. Because AI was directly involved in causing this harm through malicious use, the event qualifies as an AI Incident.[AI generated]