AI-Enabled Financial Scams Cause €20 Million Losses in Croatia, Targeting Youth


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Fraudsters in Croatia used AI tools and deepfake technology to conduct sophisticated financial scams, resulting in over €20 million in losses. Young people, especially those experiencing loneliness and social anxiety, were particularly vulnerable to emotional manipulation and deception enabled by these AI systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (AI tools and deepfake technology) by fraudsters to perpetrate financial scams that have directly led to significant financial losses and emotional manipulation, especially among young people. This fits the definition of an AI Incident because the AI system's use has directly caused harm (financial loss and emotional harm). Although the article also covers responses and prevention efforts, the core event is the realized harm from AI-enabled frauds, not just potential or complementary information.[AI generated]
AI principles
Safety; Transparency & explainability

Industries
Financial and insurance services

Affected stakeholders
General public

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Croats lost €20 million in financial scams: "Young people are more susceptible to manipulation because of loneliness"

2026-03-19
Poslovni dnevnik
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI tools and deepfake technology) by fraudsters to perpetrate financial scams that have directly led to significant financial losses and emotional manipulation, especially among young people. This fits the definition of an AI Incident because the AI system's use has directly caused harm (financial loss and emotional harm). Although the article also covers responses and prevention efforts, the core event is the realized harm from AI-enabled frauds, not just potential or complementary information.

Croats lost €20 million in financial scams; young people particularly vulnerable

2026-03-20
Jutarnji list
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and deepfake technologies in ongoing financial frauds that have already caused substantial financial harm to individuals and groups, particularly young people. The AI systems are used maliciously to imitate identities and deceive victims, directly leading to financial losses and psychological harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial and psychological) to persons and communities. The article also discusses responses and preventive measures, but the primary focus is on the realized harm caused by AI-enabled fraud.

Croats lost €20 million in financial scams: "Young people are more susceptible to manipulation because of loneliness"

2026-03-19
after5
Why's our monitor labelling this an incident or hazard?
While AI tools and deepfake technology are mentioned among the methods used by fraudsters, the article does not report a concrete AI Incident or a specific event of harm caused by AI. It instead highlights a general risk environment and the potential for harm, without detailing an actual AI-driven incident or a near miss. It is therefore best classified as Complementary Information: it provides context and awareness about AI-related risks in financial fraud but does not document a direct or indirect AI Incident or a plausible AI Hazard event.

Croats lost €20 million in financial scams: Young people are more susceptible to manipulation because of loneliness

2026-03-19
Forbes Hrvatska
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools and deepfake technology by fraudsters to conduct sophisticated financial scams that have caused substantial monetary losses. The harm is realized and significant, affecting young people who are emotionally vulnerable. The AI systems' role is pivotal in enabling these manipulations and frauds, fulfilling the criteria for an AI Incident. The event involves the use and misuse of AI systems leading directly to harm, not just a potential risk or complementary information.