
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Criminals are increasingly using AI tools—such as generative models, deepfakes, and voice cloning—to create highly convincing romance scams on social media and dating apps. These AI-powered scams have caused significant financial and emotional harm, with reported losses exceeding US$1 billion globally, including in Australia and the United States.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models, voice cloning, deepfake videos) being used to perpetrate romance scams that have caused real financial and emotional harm to victims. The harm is direct and materialised, not hypothetical or potential. The development and use of these AI systems have enabled scammers to scale their operations and improve the effectiveness of their fraudulent activities, leading to violations of victims' personal and financial security. Hence, this event qualifies as an AI Incident under the framework.[AI generated]