
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Criminal organizations in Italy are using AI-driven chatbots on WhatsApp and Telegram to simulate realistic conversations, build trust, and deceive users into making fake investments. These scams, flagged by Codacons, have led to significant financial losses as AI systems manage thousands of simultaneous fraudulent interactions.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI chatbots managing thousands of conversations simultaneously and adapting their responses naturally to deceive victims. This use of AI directly causes harm to people through financial loss, which fits the definition of an AI Incident. The harm is realized, not merely potential: victims have already been scammed out of money. This event therefore qualifies as an AI Incident because AI systems are directly involved in causing harm to individuals.[AI generated]